00:00:00.001 Started by upstream project "autotest-nightly" build number 4275 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3638 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.140 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.141 The recommended git tool is: git 00:00:00.141 using credential 00000000-0000-0000-0000-000000000002 00:00:00.143 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.179 Fetching changes from the remote Git repository 00:00:00.181 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.221 Using shallow fetch with depth 1 00:00:00.221 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.221 > git --version # timeout=10 00:00:00.260 > git --version # 'git version 2.39.2' 00:00:00.260 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.282 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.282 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.503 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.515 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.529 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:07.529 > git config core.sparsecheckout # timeout=10 00:00:07.539 > git read-tree -mu HEAD # timeout=10 00:00:07.556 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:07.572 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:07.572 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:07.685 [Pipeline] Start of Pipeline 00:00:07.700 [Pipeline] library 00:00:07.702 Loading library shm_lib@master 00:00:09.881 Library shm_lib@master is cached. Copying from home. 00:00:09.960 [Pipeline] node 00:00:10.133 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest 00:00:10.136 [Pipeline] { 00:00:10.156 [Pipeline] catchError 00:00:10.158 [Pipeline] { 00:00:10.176 [Pipeline] wrap 00:00:10.188 [Pipeline] { 00:00:10.200 [Pipeline] stage 00:00:10.202 [Pipeline] { (Prologue) 00:00:10.218 [Pipeline] echo 00:00:10.219 Node: VM-host-SM9 00:00:10.224 [Pipeline] cleanWs 00:00:10.236 [WS-CLEANUP] Deleting project workspace... 00:00:10.236 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.242 [WS-CLEANUP] done 00:00:10.542 [Pipeline] setCustomBuildProperty 00:00:10.604 [Pipeline] httpRequest 00:00:13.632 [Pipeline] echo 00:00:13.633 Sorcerer 10.211.164.20 is dead 00:00:13.642 [Pipeline] httpRequest 00:00:14.733 [Pipeline] echo 00:00:14.735 Sorcerer 10.211.164.101 is alive 00:00:14.746 [Pipeline] retry 00:00:14.747 [Pipeline] { 00:00:14.756 [Pipeline] httpRequest 00:00:14.760 HttpMethod: GET 00:00:14.761 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:14.761 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:14.763 Response Code: HTTP/1.1 200 OK 00:00:14.763 Success: Status code 200 is in the accepted range: 200,404 00:00:14.763 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:14.908 [Pipeline] } 00:00:14.925 [Pipeline] // retry 00:00:14.934 [Pipeline] sh 00:00:15.216 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:15.235 [Pipeline] httpRequest 00:00:15.747 [Pipeline] echo 00:00:15.748 Sorcerer 10.211.164.101 is alive 00:00:15.756 [Pipeline] retry 00:00:15.758 [Pipeline] { 00:00:15.771 [Pipeline] httpRequest 00:00:15.776 HttpMethod: GET 00:00:15.776 URL: http://10.211.164.101/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:15.777 Sending request to url: http://10.211.164.101/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:15.779 Response Code: HTTP/1.1 200 OK 00:00:15.779 Success: Status code 200 is in the accepted range: 200,404 00:00:15.780 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:32.296 [Pipeline] } 00:00:32.314 [Pipeline] // retry 00:00:32.321 [Pipeline] sh 00:00:32.598 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:35.895 [Pipeline] sh 00:00:36.179 + git -C spdk log --oneline -n5 00:00:36.180 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:36.180 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:36.180 4bcab9fb9 correct kick for CQ full case 00:00:36.180 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:36.180 318515b44 nvme/perf: interrupt mode support for pcie controller 00:00:36.201 [Pipeline] writeFile 00:00:36.219 [Pipeline] sh 00:00:36.501 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:36.514 [Pipeline] sh 00:00:36.798 + cat autorun-spdk.conf 00:00:36.798 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.798 SPDK_TEST_NVME=1 00:00:36.798 SPDK_TEST_FTL=1 00:00:36.798 SPDK_TEST_ISAL=1 00:00:36.798 SPDK_RUN_ASAN=1 00:00:36.798 SPDK_RUN_UBSAN=1 00:00:36.798 SPDK_TEST_XNVME=1 00:00:36.798 SPDK_TEST_NVME_FDP=1 00:00:36.798 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:36.805 RUN_NIGHTLY=1 00:00:36.807 [Pipeline] } 00:00:36.821 [Pipeline] // stage 00:00:36.839 [Pipeline] stage 00:00:36.841 [Pipeline] { (Run VM) 00:00:36.856 [Pipeline] sh 00:00:37.137 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:37.137 + echo 'Start stage prepare_nvme.sh' 00:00:37.137 Start stage prepare_nvme.sh 00:00:37.137 + [[ -n 5 ]] 00:00:37.137 + disk_prefix=ex5 00:00:37.137 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:37.137 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:37.137 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 
00:00:37.137 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:37.137 ++ SPDK_TEST_NVME=1 00:00:37.137 ++ SPDK_TEST_FTL=1 00:00:37.137 ++ SPDK_TEST_ISAL=1 00:00:37.137 ++ SPDK_RUN_ASAN=1 00:00:37.137 ++ SPDK_RUN_UBSAN=1 00:00:37.137 ++ SPDK_TEST_XNVME=1 00:00:37.137 ++ SPDK_TEST_NVME_FDP=1 00:00:37.137 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:37.137 ++ RUN_NIGHTLY=1 00:00:37.137 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:37.137 + nvme_files=() 00:00:37.137 + declare -A nvme_files 00:00:37.137 + backend_dir=/var/lib/libvirt/images/backends 00:00:37.137 + nvme_files['nvme.img']=5G 00:00:37.137 + nvme_files['nvme-cmb.img']=5G 00:00:37.137 + nvme_files['nvme-multi0.img']=4G 00:00:37.137 + nvme_files['nvme-multi1.img']=4G 00:00:37.137 + nvme_files['nvme-multi2.img']=4G 00:00:37.137 + nvme_files['nvme-openstack.img']=8G 00:00:37.137 + nvme_files['nvme-zns.img']=5G 00:00:37.137 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:37.137 + (( SPDK_TEST_FTL == 1 )) 00:00:37.137 + nvme_files["nvme-ftl.img"]=6G 00:00:37.137 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:37.137 + nvme_files["nvme-fdp.img"]=1G 00:00:37.137 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:37.137 + for nvme in "${!nvme_files[@]}" 00:00:37.137 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:37.137 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:37.137 + for nvme in "${!nvme_files[@]}" 00:00:37.137 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:00:37.396 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:37.396 + for nvme in "${!nvme_files[@]}" 00:00:37.396 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:37.396 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:37.396 + for nvme in "${!nvme_files[@]}" 00:00:37.396 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:37.655 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:37.655 + for nvme in "${!nvme_files[@]}" 00:00:37.655 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:37.655 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:37.655 + for nvme in "${!nvme_files[@]}" 00:00:37.655 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:37.655 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:37.655 + for nvme in "${!nvme_files[@]}" 00:00:37.655 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:37.914 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:37.914 + for nvme in "${!nvme_files[@]}" 00:00:37.914 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:00:37.914 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:37.914 + for nvme in 
"${!nvme_files[@]}" 00:00:37.914 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:38.172 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:38.172 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:38.172 + echo 'End stage prepare_nvme.sh' 00:00:38.172 End stage prepare_nvme.sh 00:00:38.184 [Pipeline] sh 00:00:38.468 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:38.468 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:38.728 00:00:38.728 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:38.728 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:38.728 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:38.728 HELP=0 00:00:38.728 DRY_RUN=0 00:00:38.728 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:00:38.728 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:38.728 NVME_AUTO_CREATE=0 00:00:38.728 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:00:38.728 NVME_CMB=,,,, 00:00:38.728 NVME_PMR=,,,, 00:00:38.728 NVME_ZNS=,,,, 00:00:38.728 NVME_MS=true,,,, 00:00:38.728 NVME_FDP=,,,on, 00:00:38.728 SPDK_VAGRANT_DISTRO=fedora39 00:00:38.728 SPDK_VAGRANT_VMCPU=10 00:00:38.728 SPDK_VAGRANT_VMRAM=12288 00:00:38.728 SPDK_VAGRANT_PROVIDER=libvirt 00:00:38.728 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:38.728 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:38.728 SPDK_OPENSTACK_NETWORK=0 00:00:38.728 VAGRANT_PACKAGE_BOX=0 00:00:38.728 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:38.728 FORCE_DISTRO=true 00:00:38.728 VAGRANT_BOX_VERSION= 00:00:38.728 EXTRA_VAGRANTFILES= 00:00:38.728 NIC_MODEL=e1000 00:00:38.728 00:00:38.728 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:38.728 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:42.015 Bringing machine 'default' up with 'libvirt' provider... 00:00:42.273 ==> default: Creating image (snapshot of base box volume). 00:00:42.532 ==> default: Creating domain with the following settings... 
00:00:42.532 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731830447_7939c5be91c37f9de3f9 00:00:42.532 ==> default: -- Domain type: kvm 00:00:42.532 ==> default: -- Cpus: 10 00:00:42.532 ==> default: -- Feature: acpi 00:00:42.532 ==> default: -- Feature: apic 00:00:42.532 ==> default: -- Feature: pae 00:00:42.532 ==> default: -- Memory: 12288M 00:00:42.532 ==> default: -- Memory Backing: hugepages: 00:00:42.532 ==> default: -- Management MAC: 00:00:42.532 ==> default: -- Loader: 00:00:42.532 ==> default: -- Nvram: 00:00:42.532 ==> default: -- Base box: spdk/fedora39 00:00:42.532 ==> default: -- Storage pool: default 00:00:42.532 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731830447_7939c5be91c37f9de3f9.img (20G) 00:00:42.532 ==> default: -- Volume Cache: default 00:00:42.532 ==> default: -- Kernel: 00:00:42.532 ==> default: -- Initrd: 00:00:42.532 ==> default: -- Graphics Type: vnc 00:00:42.532 ==> default: -- Graphics Port: -1 00:00:42.532 ==> default: -- Graphics IP: 127.0.0.1 00:00:42.532 ==> default: -- Graphics Password: Not defined 00:00:42.532 ==> default: -- Video Type: cirrus 00:00:42.532 ==> default: -- Video VRAM: 9216 00:00:42.532 ==> default: -- Sound Type: 00:00:42.532 ==> default: -- Keymap: en-us 00:00:42.532 ==> default: -- TPM Path: 00:00:42.532 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:42.532 ==> default: -- Command line args: 00:00:42.532 ==> default: -> value=-device, 00:00:42.532 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:42.532 ==> default: -> value=-drive, 00:00:42.532 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:42.532 ==> default: -> value=-device, 00:00:42.532 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:42.532 ==> default: -> value=-device, 00:00:42.532 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:42.532 ==> default: -> value=-drive, 00:00:42.532 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 00:00:42.532 ==> default: -> value=-device, 00:00:42.532 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:42.532 ==> default: -> value=-device, 00:00:42.532 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:42.532 ==> default: -> value=-drive, 00:00:42.532 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:42.532 ==> default: -> value=-device, 00:00:42.532 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:42.532 ==> default: -> value=-drive, 00:00:42.532 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:42.532 ==> default: -> value=-device, 00:00:42.532 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:42.532 ==> default: -> value=-drive, 00:00:42.532 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:42.532 ==> default: -> value=-device, 00:00:42.532 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:42.532 ==> default: -> value=-device, 00:00:42.532 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:42.532 ==> default: -> value=-device, 00:00:42.532 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:42.532 ==> default: -> value=-drive, 00:00:42.532 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:42.532 ==> default: -> value=-device, 00:00:42.532 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:42.532 ==> default: Creating shared folders metadata... 00:00:42.532 ==> default: Starting domain. 00:00:43.912 ==> default: Waiting for domain to get an IP address... 00:00:58.796 ==> default: Waiting for SSH to become available... 00:01:00.175 ==> default: Configuring and enabling network interfaces... 00:01:04.367 default: SSH address: 192.168.121.198:22 00:01:04.367 default: SSH username: vagrant 00:01:04.367 default: SSH auth method: private key 00:01:06.902 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:15.020 ==> default: Mounting SSHFS shared folder... 00:01:16.399 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:16.399 ==> default: Checking Mount.. 00:01:17.777 ==> default: Folder Successfully Mounted! 00:01:17.777 ==> default: Running provisioner: file... 00:01:18.713 default: ~/.gitconfig => .gitconfig 00:01:18.972 00:01:18.972 SUCCESS! 00:01:18.972 00:01:18.972 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:18.972 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:18.972 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:18.972 00:01:19.005 [Pipeline] } 00:01:19.019 [Pipeline] // stage 00:01:19.027 [Pipeline] dir 00:01:19.028 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:19.029 [Pipeline] { 00:01:19.041 [Pipeline] catchError 00:01:19.043 [Pipeline] { 00:01:19.055 [Pipeline] sh 00:01:19.379 + vagrant ssh-config --host vagrant 00:01:19.379 + sed -ne /^Host/,$p 00:01:19.379 + tee ssh_conf 00:01:21.915 Host vagrant 00:01:21.915 HostName 192.168.121.198 00:01:21.915 User vagrant 00:01:21.915 Port 22 00:01:21.915 UserKnownHostsFile /dev/null 00:01:21.915 StrictHostKeyChecking no 00:01:21.915 PasswordAuthentication no 00:01:21.915 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:21.915 IdentitiesOnly yes 00:01:21.915 LogLevel FATAL 00:01:21.915 ForwardAgent yes 00:01:21.915 ForwardX11 yes 00:01:21.915 00:01:21.930 [Pipeline] withEnv 00:01:21.932 [Pipeline] { 00:01:21.945 [Pipeline] sh 00:01:22.225 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:22.225 source /etc/os-release 00:01:22.225 [[ -e /image.version ]] && img=$(< /image.version) 00:01:22.225 # Minimal, systemd-like check. 
00:01:22.225 if [[ -e /.dockerenv ]]; then 00:01:22.225 # Clear garbage from the node's name: 00:01:22.225 # agt-er_autotest_547-896 -> autotest_547-896 00:01:22.225 # $HOSTNAME is the actual container id 00:01:22.225 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:22.225 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:22.225 # We can assume this is a mount from a host where container is running, 00:01:22.225 # so fetch its hostname to easily identify the target swarm worker. 00:01:22.225 container="$(< /etc/hostname) ($agent)" 00:01:22.225 else 00:01:22.225 # Fallback 00:01:22.225 container=$agent 00:01:22.225 fi 00:01:22.225 fi 00:01:22.225 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:22.225 00:01:22.497 [Pipeline] } 00:01:22.514 [Pipeline] // withEnv 00:01:22.523 [Pipeline] setCustomBuildProperty 00:01:22.538 [Pipeline] stage 00:01:22.541 [Pipeline] { (Tests) 00:01:22.557 [Pipeline] sh 00:01:22.838 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:23.111 [Pipeline] sh 00:01:23.392 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:23.666 [Pipeline] timeout 00:01:23.667 Timeout set to expire in 50 min 00:01:23.669 [Pipeline] { 00:01:23.683 [Pipeline] sh 00:01:23.964 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:24.531 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:24.543 [Pipeline] sh 00:01:24.823 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:25.097 [Pipeline] sh 00:01:25.378 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:25.655 [Pipeline] sh 00:01:25.937 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:01:26.196 ++ readlink -f spdk_repo 00:01:26.196 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:26.196 + [[ -n /home/vagrant/spdk_repo ]] 00:01:26.196 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:26.196 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:26.196 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:26.196 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:26.196 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:26.196 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:26.196 + cd /home/vagrant/spdk_repo 00:01:26.196 + source /etc/os-release 00:01:26.196 ++ NAME='Fedora Linux' 00:01:26.196 ++ VERSION='39 (Cloud Edition)' 00:01:26.196 ++ ID=fedora 00:01:26.196 ++ VERSION_ID=39 00:01:26.196 ++ VERSION_CODENAME= 00:01:26.196 ++ PLATFORM_ID=platform:f39 00:01:26.196 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:26.196 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:26.196 ++ LOGO=fedora-logo-icon 00:01:26.196 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:26.196 ++ HOME_URL=https://fedoraproject.org/ 00:01:26.196 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:26.196 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:26.196 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:26.196 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:26.196 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:26.196 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:26.196 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:26.196 ++ SUPPORT_END=2024-11-12 00:01:26.196 ++ VARIANT='Cloud Edition' 00:01:26.196 ++ VARIANT_ID=cloud 00:01:26.196 + uname -a 00:01:26.196 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:26.196 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:26.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:26.715 Hugepages 00:01:26.715 node hugesize free / total 00:01:26.715 node0 1048576kB 0 / 0 00:01:26.715 node0 2048kB 0 / 0 00:01:26.715 00:01:26.715 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:26.975 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:26.975 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:01:26.975 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:26.975 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:26.975 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:26.975 + rm -f /tmp/spdk-ld-path 00:01:26.975 + source autorun-spdk.conf 00:01:26.975 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.975 ++ SPDK_TEST_NVME=1 00:01:26.975 ++ SPDK_TEST_FTL=1 00:01:26.975 ++ SPDK_TEST_ISAL=1 00:01:26.975 ++ SPDK_RUN_ASAN=1 00:01:26.975 ++ SPDK_RUN_UBSAN=1 00:01:26.975 ++ SPDK_TEST_XNVME=1 00:01:26.975 ++ SPDK_TEST_NVME_FDP=1 00:01:26.975 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:26.975 ++ RUN_NIGHTLY=1 00:01:26.975 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:26.975 + [[ -n '' ]] 00:01:26.975 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:26.975 + for M in /var/spdk/build-*-manifest.txt 00:01:26.975 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:26.975 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.975 + for M in /var/spdk/build-*-manifest.txt 00:01:26.975 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:26.975 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.975 + for M in /var/spdk/build-*-manifest.txt 00:01:26.975 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:26.975 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:26.975 ++ uname 00:01:26.975 + [[ Linux == \L\i\n\u\x ]] 00:01:26.975 + sudo dmesg -T 00:01:26.975 + sudo dmesg --clear 00:01:26.975 + dmesg_pid=5299 00:01:26.975 
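
The "setup.sh status" table above (BDF, vendor/device ID, bound driver, block devices) is read straight out of sysfs. A minimal sketch of the same walk, for illustration only; the real logic lives in spdk/scripts/setup.sh and covers many more cases (this script is not part of the log):

#!/usr/bin/env bash
# Illustrative only, not spdk/scripts/setup.sh: print each PCI function's
# BDF, vendor/device IDs, bound driver, and any NVMe block devices.
shopt -s nullglob
for dev in /sys/bus/pci/devices/*; do
    bdf=${dev##*/}
    vendor=$(<"$dev/vendor")                 # e.g. 0x1b36 for the QEMU NVMe devices above
    device=$(<"$dev/device")                 # e.g. 0x0010
    driver=unknown
    [[ -e $dev/driver ]] && driver=$(basename "$(readlink -f "$dev/driver")")
    blocks=
    for ns in "$dev"/nvme/nvme*/nvme*n*; do  # namespaces appear as block devices
        blocks+="${ns##*/} "
    done
    printf '%-12s %s %s %-12s %s\n' "$bdf" "${vendor#0x}" "${device#0x}" "$driver" "${blocks:--}"
done

The sysfs attributes used here are world-readable, so unlike the bind/unbind paths of the real script this listing needs no root.
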
+ [[ Fedora Linux == FreeBSD ]] 00:01:26.975 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.975 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:26.975 + sudo dmesg -Tw 00:01:26.975 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:26.975 + [[ -x /usr/src/fio-static/fio ]] 00:01:26.975 + export FIO_BIN=/usr/src/fio-static/fio 00:01:26.975 + FIO_BIN=/usr/src/fio-static/fio 00:01:26.975 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:26.975 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:26.975 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:26.975 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.975 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:26.975 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:26.975 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.975 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:26.975 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:27.235 08:01:32 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:27.235 08:01:32 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:27.235 08:01:32 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.235 08:01:32 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:27.235 08:01:32 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:27.235 08:01:32 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:27.235 08:01:32 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:27.235 08:01:32 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:27.235 08:01:32 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:27.235 08:01:32 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:27.235 08:01:32 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:27.235 08:01:32 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:01:27.235 08:01:32 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:27.235 08:01:32 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:27.235 08:01:32 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:27.235 08:01:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:27.235 08:01:32 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:27.235 08:01:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:27.235 08:01:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:27.235 08:01:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:27.235 08:01:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.235 08:01:32 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.235 08:01:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.235 08:01:32 -- paths/export.sh@5 -- $ export PATH 00:01:27.235 08:01:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.235 08:01:32 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:27.235 08:01:32 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:27.235 08:01:32 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731830492.XXXXXX 00:01:27.235 08:01:32 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731830492.mZyRkT 00:01:27.235 08:01:32 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:27.235 08:01:32 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:27.235 08:01:32 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:27.235 08:01:32 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:27.235 08:01:32 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:27.235 08:01:32 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:27.235 08:01:32 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:27.235 08:01:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.235 08:01:32 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:27.235 08:01:32 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:27.235 08:01:32 -- pm/common@17 -- $ local monitor 00:01:27.235 08:01:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.235 08:01:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.235 08:01:32 -- pm/common@25 -- $ sleep 1 00:01:27.235 08:01:32 -- pm/common@21 -- $ date +%s 00:01:27.235 08:01:32 -- pm/common@21 -- $ date +%s 00:01:27.235 08:01:32 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731830492 00:01:27.235 08:01:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731830492 00:01:27.235 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731830492_collect-vmstat.pm.log 00:01:27.235 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731830492_collect-cpu-load.pm.log 00:01:28.173 08:01:33 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:28.173 08:01:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:28.173 08:01:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:28.173 08:01:33 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:28.173 08:01:33 -- spdk/autobuild.sh@16 -- $ date -u 00:01:28.173 Sun Nov 17 08:01:33 AM UTC 2024 00:01:28.173 08:01:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:28.173 v25.01-pre-189-g83e8405e4 00:01:28.173 08:01:33 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:28.173 08:01:33 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:28.173 08:01:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:28.173 08:01:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:28.173 08:01:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.173 ************************************ 00:01:28.173 START TEST asan 00:01:28.173 ************************************ 00:01:28.173 using asan 00:01:28.173 08:01:33 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:28.173 00:01:28.173 real 0m0.000s 00:01:28.173 user 0m0.000s 00:01:28.173 sys 0m0.000s 00:01:28.174 08:01:33 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:28.174 08:01:33 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:28.174 ************************************ 00:01:28.174 END TEST asan 00:01:28.174 ************************************ 00:01:28.433 08:01:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:28.433 08:01:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:28.433 08:01:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:28.433 08:01:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:28.433 08:01:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.433 ************************************ 00:01:28.433 START TEST ubsan 00:01:28.433 ************************************ 00:01:28.433 using ubsan 00:01:28.433 08:01:33 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:28.433 00:01:28.433 real 0m0.000s 00:01:28.433 user 0m0.000s 00:01:28.433 sys 0m0.000s 00:01:28.433 08:01:33 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:28.433 08:01:33 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:28.433 ************************************ 00:01:28.433 END TEST ubsan 00:01:28.433 ************************************ 00:01:28.433 08:01:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:28.433 08:01:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:28.433 08:01:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:28.433 08:01:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:28.433 08:01:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:28.433 08:01:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:28.433 08:01:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
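
The collect-cpu-load / collect-vmstat invocations and the "trap stop_monitor_resources EXIT" above show the resource-monitor pattern autobuild uses: background samplers whose log names carry an epoch suffix, torn down from an EXIT trap. A simplified stand-in of that shape; only the start/trap/stop structure mirrors the log, the sampler bodies here are invented for illustration (the real helpers are spdk/scripts/perf/pm/collect-*):

#!/usr/bin/env bash
# Hypothetical reduction of the pm/common monitor pattern seen above.
outdir=./power
mkdir -p "$outdir"
stamp=$(date +%s)                # same idea as the .1731830492 suffix in the log
pids=()

sample_vmstat()  { while true; do vmstat 1 1; sleep 1; done >> "$outdir/collect-vmstat.$stamp.log"; }
sample_cpuload() { while true; do grep '^cpu ' /proc/stat; sleep 1; done >> "$outdir/collect-cpu-load.$stamp.log"; }

stop_monitor_resources() { kill "${pids[@]}" 2>/dev/null; wait 2>/dev/null; }
trap stop_monitor_resources EXIT

sample_vmstat &  pids+=($!)
sample_cpuload & pids+=($!)

make -j10                        # the monitored workload; autobuild runs configure/make here
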
00:01:28.433 08:01:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:28.433 08:01:33 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:28.433 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:28.433 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:29.001 Using 'verbs' RDMA provider
00:01:44.823 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:57.166 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:57.166 Creating mk/config.mk...done.
00:01:57.166 Creating mk/cc.flags.mk...done.
00:01:57.166 Type 'make' to build.
00:01:57.166 08:02:01 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:57.166 08:02:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:57.166 08:02:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:57.166 08:02:01 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.166 ************************************
00:01:57.166 START TEST make
00:01:57.166 ************************************
00:01:57.166 08:02:02 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:57.425 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:01:57.425 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:01:57.425 meson setup builddir \
00:01:57.425 -Dwith-libaio=enabled \
00:01:57.425 -Dwith-liburing=enabled \
00:01:57.425 -Dwith-libvfn=disabled \
00:01:57.425 -Dwith-spdk=disabled \
00:01:57.425 -Dexamples=false \
00:01:57.425 -Dtests=false \
00:01:57.425 -Dtools=false && \
00:01:57.425 meson compile -C builddir && \
00:01:57.425 cd -)
00:01:57.425 make[1]: Nothing to be done for 'all'.
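
The START TEST / END TEST banners and the real/user/sys timings sprinkled through this log (TEST asan, TEST ubsan, TEST make) come from the run_test() helper that the xtrace prefixes attribute to common/autotest_common.sh. A simplified sketch of its shape only, not the actual function body (the real helper also manages xtrace and failure bookkeeping):

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"            # emits the real/user/sys lines seen in the log
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}

run_test ubsan echo 'using ubsan'    # same invocation as in the log above
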
00:01:59.960 The Meson build system 00:01:59.960 Version: 1.5.0 00:01:59.960 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:59.960 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:59.960 Build type: native build 00:01:59.960 Project name: xnvme 00:01:59.960 Project version: 0.7.5 00:01:59.960 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:59.960 C linker for the host machine: cc ld.bfd 2.40-14 00:01:59.960 Host machine cpu family: x86_64 00:01:59.960 Host machine cpu: x86_64 00:01:59.960 Message: host_machine.system: linux 00:01:59.960 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:59.960 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:59.960 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:59.961 Run-time dependency threads found: YES 00:01:59.961 Has header "setupapi.h" : NO 00:01:59.961 Has header "linux/blkzoned.h" : YES 00:01:59.961 Has header "linux/blkzoned.h" : YES (cached) 00:01:59.961 Has header "libaio.h" : YES 00:01:59.961 Library aio found: YES 00:01:59.961 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:59.961 Run-time dependency liburing found: YES 2.2 00:01:59.961 Dependency libvfn skipped: feature with-libvfn disabled 00:01:59.961 Found CMake: /usr/bin/cmake (3.27.7) 00:01:59.961 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:01:59.961 Subproject spdk : skipped: feature with-spdk disabled 00:01:59.961 Run-time dependency appleframeworks found: NO (tried framework) 00:01:59.961 Run-time dependency appleframeworks found: NO (tried framework) 00:01:59.961 Library rt found: YES 00:01:59.961 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:59.961 Configuring xnvme_config.h using configuration 00:01:59.961 Configuring xnvme.spec using configuration 00:01:59.961 Run-time dependency bash-completion found: YES 2.11 00:01:59.961 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:59.961 Program cp found: YES (/usr/bin/cp) 00:01:59.961 Build targets in project: 3 00:01:59.961 00:01:59.961 xnvme 0.7.5 00:01:59.961 00:01:59.961 Subprojects 00:01:59.961 spdk : NO Feature 'with-spdk' disabled 00:01:59.961 00:01:59.961 User defined options 00:01:59.961 examples : false 00:01:59.961 tests : false 00:01:59.961 tools : false 00:01:59.961 with-libaio : enabled 00:01:59.961 with-liburing: enabled 00:01:59.961 with-libvfn : disabled 00:01:59.961 with-spdk : disabled 00:01:59.961 00:01:59.961 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.220 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:00.479 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:00.479 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:00.479 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:00.479 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:00.479 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:00.479 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:00.479 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:00.479 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:00.479 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:00.479 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:00.479 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:00.479 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:00.479 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:00.479 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:00.738 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:00.738 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:00.738 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:00.738 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:00.738 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:00.738 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:00.738 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:00.738 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:00.738 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:00.738 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:00.738 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:00.738 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:00.738 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:00.738 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:00.738 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:00.738 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:00.738 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:00.738 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:00.738 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:00.738 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:00.738 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:00.738 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:00.738 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:00.738 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:00.738 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:00.738 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:00.738 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:00.738 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:00.738 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:00.738 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:00.738 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:00.738 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:00.997 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:00.998 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:00.998 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:00.998 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:00.998 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:00.998 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:00.998 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:00.998 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:00.998 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:00.998 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:00.998 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:00.998 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:00.998 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:00.998 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:00.998 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:00.998 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:00.998 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:00.998 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:01.257 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:01.257 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:01.257 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:01.257 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:01.257 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:01.257 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:01.257 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:01.257 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:01.257 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:01.824 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:01.824 [75/76] Linking static target lib/libxnvme.a 00:02:01.824 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:01.824 INFO: autodetecting backend as ninja 00:02:01.824 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:01.824 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:09.942 The Meson build system 00:02:09.942 Version: 1.5.0 00:02:09.942 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:09.942 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:09.942 Build type: native build 00:02:09.942 Program cat found: YES (/usr/bin/cat) 00:02:09.942 Project name: DPDK 00:02:09.942 Project version: 24.03.0 00:02:09.942 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:09.942 C linker for the host machine: cc ld.bfd 2.40-14 00:02:09.942 Host machine cpu family: x86_64 00:02:09.942 Host machine cpu: x86_64 00:02:09.942 Message: ## Building in Developer Mode ## 00:02:09.942 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:09.942 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:09.942 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:09.942 Program python3 found: YES (/usr/bin/python3) 00:02:09.942 Program cat found: YES (/usr/bin/cat) 00:02:09.942 Compiler for C supports arguments -march=native: YES 00:02:09.942 Checking for size of "void *" : 8 00:02:09.942 Checking for size of "void *" : 8 (cached) 00:02:09.942 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:09.942 Library m found: YES 00:02:09.942 Library numa found: YES 00:02:09.942 Has header "numaif.h" : YES 00:02:09.942 Library fdt found: NO 00:02:09.942 Library execinfo found: NO 00:02:09.942 Has header "execinfo.h" : YES 00:02:09.942 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:09.942 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:09.942 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:09.942 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:09.942 Run-time dependency openssl found: YES 3.1.1 00:02:09.942 Run-time dependency libpcap found: YES 1.10.4 00:02:09.942 Has header "pcap.h" with dependency libpcap: YES 00:02:09.942 Compiler for C supports arguments -Wcast-qual: YES 00:02:09.942 Compiler for C supports arguments -Wdeprecated: YES 00:02:09.942 Compiler for C supports arguments -Wformat: YES 00:02:09.942 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:09.942 Compiler for C supports arguments -Wformat-security: NO 00:02:09.942 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:09.942 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:09.942 Compiler for C supports arguments -Wnested-externs: YES 00:02:09.942 Compiler for C supports arguments -Wold-style-definition: YES 00:02:09.942 Compiler for C supports arguments -Wpointer-arith: YES 00:02:09.942 Compiler for C supports arguments -Wsign-compare: YES 00:02:09.942 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:09.942 Compiler for C supports arguments -Wundef: YES 00:02:09.942 Compiler for C supports arguments -Wwrite-strings: YES 00:02:09.942 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:09.942 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:09.942 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:09.942 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:09.942 Program objdump found: YES (/usr/bin/objdump) 00:02:09.942 Compiler for C supports arguments -mavx512f: YES 00:02:09.942 Checking if "AVX512 checking" compiles: YES 00:02:09.942 Fetching value of define "__SSE4_2__" : 1 00:02:09.942 Fetching value of define "__AES__" : 1 00:02:09.942 Fetching value of define "__AVX__" : 1 00:02:09.942 Fetching value of define "__AVX2__" : 1 00:02:09.942 Fetching value of define "__AVX512BW__" : (undefined) 00:02:09.942 Fetching value of define "__AVX512CD__" : (undefined) 00:02:09.942 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:09.942 Fetching value of define "__AVX512F__" : (undefined) 00:02:09.942 Fetching value of define "__AVX512VL__" : (undefined) 00:02:09.942 Fetching value of define "__PCLMUL__" : 1 00:02:09.942 Fetching value of define "__RDRND__" : 1 00:02:09.942 Fetching value of define "__RDSEED__" : 1 00:02:09.942 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:09.942 Fetching value of define "__znver1__" : (undefined) 00:02:09.942 Fetching value of define "__znver2__" : (undefined) 00:02:09.942 Fetching value of define "__znver3__" : (undefined) 00:02:09.942 Fetching value of define "__znver4__" : (undefined) 00:02:09.942 Library asan found: YES 00:02:09.942 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:09.942 Message: lib/log: Defining dependency "log" 00:02:09.942 Message: lib/kvargs: Defining dependency "kvargs" 00:02:09.943 Message: lib/telemetry: Defining dependency "telemetry" 00:02:09.943 Library rt found: YES 00:02:09.943 
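
Each "Compiler for C supports arguments -X: YES/NO" line in the configure output above amounts to a tiny compile probe. A shell equivalent of the idea, for illustration; this is not Meson's implementation, and it assumes cc is on PATH:

supports_cflag() {
    # Compile an empty program with the candidate flag; -Werror turns
    # "unrecognized option" diagnostics into a failed probe.
    echo 'int main(void) { return 0; }' | cc -Werror "$1" -x c -o /dev/null - 2>/dev/null
}
supports_cflag -mavx512f && echo '-mavx512f: YES' || echo '-mavx512f: NO'
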
Checking for function "getentropy" : NO 00:02:09.943 Message: lib/eal: Defining dependency "eal" 00:02:09.943 Message: lib/ring: Defining dependency "ring" 00:02:09.943 Message: lib/rcu: Defining dependency "rcu" 00:02:09.943 Message: lib/mempool: Defining dependency "mempool" 00:02:09.943 Message: lib/mbuf: Defining dependency "mbuf" 00:02:09.943 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:09.943 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.943 Compiler for C supports arguments -mpclmul: YES 00:02:09.943 Compiler for C supports arguments -maes: YES 00:02:09.943 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.943 Compiler for C supports arguments -mavx512bw: YES 00:02:09.943 Compiler for C supports arguments -mavx512dq: YES 00:02:09.943 Compiler for C supports arguments -mavx512vl: YES 00:02:09.943 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:09.943 Compiler for C supports arguments -mavx2: YES 00:02:09.943 Compiler for C supports arguments -mavx: YES 00:02:09.943 Message: lib/net: Defining dependency "net" 00:02:09.943 Message: lib/meter: Defining dependency "meter" 00:02:09.943 Message: lib/ethdev: Defining dependency "ethdev" 00:02:09.943 Message: lib/pci: Defining dependency "pci" 00:02:09.943 Message: lib/cmdline: Defining dependency "cmdline" 00:02:09.943 Message: lib/hash: Defining dependency "hash" 00:02:09.943 Message: lib/timer: Defining dependency "timer" 00:02:09.943 Message: lib/compressdev: Defining dependency "compressdev" 00:02:09.943 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:09.943 Message: lib/dmadev: Defining dependency "dmadev" 00:02:09.943 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:09.943 Message: lib/power: Defining dependency "power" 00:02:09.943 Message: lib/reorder: Defining dependency "reorder" 00:02:09.943 Message: lib/security: Defining dependency "security" 00:02:09.943 Has header "linux/userfaultfd.h" : YES 00:02:09.943 Has header "linux/vduse.h" : YES 00:02:09.943 Message: lib/vhost: Defining dependency "vhost" 00:02:09.943 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:09.943 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:09.943 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.943 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.943 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:09.943 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:09.943 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:09.943 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:09.943 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:09.943 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:09.943 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:09.943 Configuring doxy-api-html.conf using configuration 00:02:09.943 Configuring doxy-api-man.conf using configuration 00:02:09.943 Program mandb found: YES (/usr/bin/mandb) 00:02:09.943 Program sphinx-build found: NO 00:02:09.943 Configuring rte_build_config.h using configuration 00:02:09.943 Message: 00:02:09.943 ================= 00:02:09.943 Applications Enabled 00:02:09.943 ================= 00:02:09.943 00:02:09.943 apps: 00:02:09.943 00:02:09.943 00:02:09.943 Message: 00:02:09.943 ================= 00:02:09.943 Libraries Enabled 00:02:09.943 ================= 
00:02:09.943
00:02:09.943 libs:
00:02:09.943 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:09.943 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:09.943 cryptodev, dmadev, power, reorder, security, vhost,
00:02:09.943
00:02:09.943 Message:
00:02:09.943 ===============
00:02:09.943 Drivers Enabled
00:02:09.943 ===============
00:02:09.943
00:02:09.943 common:
00:02:09.943
00:02:09.943 bus:
00:02:09.943 pci, vdev,
00:02:09.943 mempool:
00:02:09.943 ring,
00:02:09.943 dma:
00:02:09.943
00:02:09.943 net:
00:02:09.943
00:02:09.943 crypto:
00:02:09.943
00:02:09.943 compress:
00:02:09.943
00:02:09.943 vdpa:
00:02:09.943
00:02:09.943
00:02:09.943 Message:
00:02:09.943 =================
00:02:09.943 Content Skipped
00:02:09.943 =================
00:02:09.943
00:02:09.943 apps:
00:02:09.943 dumpcap: explicitly disabled via build config
00:02:09.943 graph: explicitly disabled via build config
00:02:09.943 pdump: explicitly disabled via build config
00:02:09.943 proc-info: explicitly disabled via build config
00:02:09.943 test-acl: explicitly disabled via build config
00:02:09.943 test-bbdev: explicitly disabled via build config
00:02:09.943 test-cmdline: explicitly disabled via build config
00:02:09.943 test-compress-perf: explicitly disabled via build config
00:02:09.943 test-crypto-perf: explicitly disabled via build config
00:02:09.943 test-dma-perf: explicitly disabled via build config
00:02:09.943 test-eventdev: explicitly disabled via build config
00:02:09.943 test-fib: explicitly disabled via build config
00:02:09.943 test-flow-perf: explicitly disabled via build config
00:02:09.943 test-gpudev: explicitly disabled via build config
00:02:09.943 test-mldev: explicitly disabled via build config
00:02:09.943 test-pipeline: explicitly disabled via build config
00:02:09.943 test-pmd: explicitly disabled via build config
00:02:09.943 test-regex: explicitly disabled via build config
00:02:09.943 test-sad: explicitly disabled via build config
00:02:09.943 test-security-perf: explicitly disabled via build config
00:02:09.943
00:02:09.943 libs:
00:02:09.943 argparse: explicitly disabled via build config
00:02:09.943 metrics: explicitly disabled via build config
00:02:09.943 acl: explicitly disabled via build config
00:02:09.943 bbdev: explicitly disabled via build config
00:02:09.943 bitratestats: explicitly disabled via build config
00:02:09.943 bpf: explicitly disabled via build config
00:02:09.943 cfgfile: explicitly disabled via build config
00:02:09.943 distributor: explicitly disabled via build config
00:02:09.943 efd: explicitly disabled via build config
00:02:09.943 eventdev: explicitly disabled via build config
00:02:09.943 dispatcher: explicitly disabled via build config
00:02:09.943 gpudev: explicitly disabled via build config
00:02:09.943 gro: explicitly disabled via build config
00:02:09.943 gso: explicitly disabled via build config
00:02:09.943 ip_frag: explicitly disabled via build config
00:02:09.943 jobstats: explicitly disabled via build config
00:02:09.943 latencystats: explicitly disabled via build config
00:02:09.943 lpm: explicitly disabled via build config
00:02:09.943 member: explicitly disabled via build config
00:02:09.943 pcapng: explicitly disabled via build config
00:02:09.943 rawdev: explicitly disabled via build config
00:02:09.943 regexdev: explicitly disabled via build config
00:02:09.943 mldev: explicitly disabled via build config
00:02:09.943 rib: explicitly disabled via build config
00:02:09.943 sched: explicitly disabled via build config
00:02:09.943 stack: explicitly disabled via build config
00:02:09.943 ipsec: explicitly disabled via build config
00:02:09.943 pdcp: explicitly disabled via build config
00:02:09.943 fib: explicitly disabled via build config
00:02:09.943 port: explicitly disabled via build config
00:02:09.943 pdump: explicitly disabled via build config
00:02:09.943 table: explicitly disabled via build config
00:02:09.943 pipeline: explicitly disabled via build config
00:02:09.943 graph: explicitly disabled via build config
00:02:09.943 node: explicitly disabled via build config
00:02:09.943
00:02:09.943 drivers:
00:02:09.943 common/cpt: not in enabled drivers build config
00:02:09.943 common/dpaax: not in enabled drivers build config
00:02:09.943 common/iavf: not in enabled drivers build config
00:02:09.943 common/idpf: not in enabled drivers build config
00:02:09.943 common/ionic: not in enabled drivers build config
00:02:09.943 common/mvep: not in enabled drivers build config
00:02:09.943 common/octeontx: not in enabled drivers build config
00:02:09.943 bus/auxiliary: not in enabled drivers build config
00:02:09.943 bus/cdx: not in enabled drivers build config
00:02:09.943 bus/dpaa: not in enabled drivers build config
00:02:09.943 bus/fslmc: not in enabled drivers build config
00:02:09.943 bus/ifpga: not in enabled drivers build config
00:02:09.943 bus/platform: not in enabled drivers build config
00:02:09.943 bus/uacce: not in enabled drivers build config
00:02:09.943 bus/vmbus: not in enabled drivers build config
00:02:09.943 common/cnxk: not in enabled drivers build config
00:02:09.943 common/mlx5: not in enabled drivers build config
00:02:09.943 common/nfp: not in enabled drivers build config
00:02:09.943 common/nitrox: not in enabled drivers build config
00:02:09.943 common/qat: not in enabled drivers build config
00:02:09.943 common/sfc_efx: not in enabled drivers build config
00:02:09.943 mempool/bucket: not in enabled drivers build config
00:02:09.943 mempool/cnxk: not in enabled drivers build config
00:02:09.943 mempool/dpaa: not in enabled drivers build config
00:02:09.944 mempool/dpaa2: not in enabled drivers build config
00:02:09.944 mempool/octeontx: not in enabled drivers build config
00:02:09.944 mempool/stack: not in enabled drivers build config
00:02:09.944 dma/cnxk: not in enabled drivers build config
00:02:09.944 dma/dpaa: not in enabled drivers build config
00:02:09.944 dma/dpaa2: not in enabled drivers build config
00:02:09.944 dma/hisilicon: not in enabled drivers build config
00:02:09.944 dma/idxd: not in enabled drivers build config
00:02:09.944 dma/ioat: not in enabled drivers build config
00:02:09.944 dma/skeleton: not in enabled drivers build config
00:02:09.944 net/af_packet: not in enabled drivers build config
00:02:09.944 net/af_xdp: not in enabled drivers build config
00:02:09.944 net/ark: not in enabled drivers build config
00:02:09.944 net/atlantic: not in enabled drivers build config
00:02:09.944 net/avp: not in enabled drivers build config
00:02:09.944 net/axgbe: not in enabled drivers build config
00:02:09.944 net/bnx2x: not in enabled drivers build config
00:02:09.944 net/bnxt: not in enabled drivers build config
00:02:09.944 net/bonding: not in enabled drivers build config
00:02:09.944 net/cnxk: not in enabled drivers build config
00:02:09.944 net/cpfl: not in enabled drivers build config
00:02:09.944 net/cxgbe: not in enabled drivers build config
00:02:09.944 net/dpaa: not in enabled drivers build config
00:02:09.944 net/dpaa2: not in enabled drivers build config
00:02:09.944 net/e1000: not in enabled drivers build config
00:02:09.944 net/ena: not in enabled drivers build config
00:02:09.944 net/enetc: not in enabled drivers build config
00:02:09.944 net/enetfec: not in enabled drivers build config
00:02:09.944 net/enic: not in enabled drivers build config
00:02:09.944 net/failsafe: not in enabled drivers build config
00:02:09.944 net/fm10k: not in enabled drivers build config
00:02:09.944 net/gve: not in enabled drivers build config
00:02:09.944 net/hinic: not in enabled drivers build config
00:02:09.944 net/hns3: not in enabled drivers build config
00:02:09.944 net/i40e: not in enabled drivers build config
00:02:09.944 net/iavf: not in enabled drivers build config
00:02:09.944 net/ice: not in enabled drivers build config
00:02:09.944 net/idpf: not in enabled drivers build config
00:02:09.944 net/igc: not in enabled drivers build config
00:02:09.944 net/ionic: not in enabled drivers build config
00:02:09.944 net/ipn3ke: not in enabled drivers build config
00:02:09.944 net/ixgbe: not in enabled drivers build config
00:02:09.944 net/mana: not in enabled drivers build config
00:02:09.944 net/memif: not in enabled drivers build config
00:02:09.944 net/mlx4: not in enabled drivers build config
00:02:09.944 net/mlx5: not in enabled drivers build config
00:02:09.944 net/mvneta: not in enabled drivers build config
00:02:09.944 net/mvpp2: not in enabled drivers build config
00:02:09.944 net/netvsc: not in enabled drivers build config
00:02:09.944 net/nfb: not in enabled drivers build config
00:02:09.944 net/nfp: not in enabled drivers build config
00:02:09.944 net/ngbe: not in enabled drivers build config
00:02:09.944 net/null: not in enabled drivers build config
00:02:09.944 net/octeontx: not in enabled drivers build config
00:02:09.944 net/octeon_ep: not in enabled drivers build config
00:02:09.944 net/pcap: not in enabled drivers build config
00:02:09.944 net/pfe: not in enabled drivers build config
00:02:09.944 net/qede: not in enabled drivers build config
00:02:09.944 net/ring: not in enabled drivers build config
00:02:09.944 net/sfc: not in enabled drivers build config
00:02:09.944 net/softnic: not in enabled drivers build config
00:02:09.944 net/tap: not in enabled drivers build config
00:02:09.944 net/thunderx: not in enabled drivers build config
00:02:09.944 net/txgbe: not in enabled drivers build config
00:02:09.944 net/vdev_netvsc: not in enabled drivers build config
00:02:09.944 net/vhost: not in enabled drivers build config
00:02:09.944 net/virtio: not in enabled drivers build config
00:02:09.944 net/vmxnet3: not in enabled drivers build config
00:02:09.944 raw/*: missing internal dependency, "rawdev"
00:02:09.944 crypto/armv8: not in enabled drivers build config
00:02:09.944 crypto/bcmfs: not in enabled drivers build config
00:02:09.944 crypto/caam_jr: not in enabled drivers build config
00:02:09.944 crypto/ccp: not in enabled drivers build config
00:02:09.944 crypto/cnxk: not in enabled drivers build config
00:02:09.944 crypto/dpaa_sec: not in enabled drivers build config
00:02:09.944 crypto/dpaa2_sec: not in enabled drivers build config
00:02:09.944 crypto/ipsec_mb: not in enabled drivers build config
00:02:09.944 crypto/mlx5: not in enabled drivers build config
00:02:09.944 crypto/mvsam: not in enabled drivers build config
00:02:09.944 crypto/nitrox: not in enabled drivers build config
00:02:09.944 crypto/null: not in enabled drivers build config
00:02:09.944 crypto/octeontx: not in enabled drivers build config
00:02:09.944 crypto/openssl: not in enabled drivers build config
00:02:09.944 crypto/scheduler: not in enabled drivers build config
00:02:09.944 crypto/uadk: not in enabled drivers build config
00:02:09.944 crypto/virtio: not in enabled drivers build config
00:02:09.944 compress/isal: not in enabled drivers build config
00:02:09.944 compress/mlx5: not in enabled drivers build config
00:02:09.944 compress/nitrox: not in enabled drivers build config
00:02:09.944 compress/octeontx: not in enabled drivers build config
00:02:09.944 compress/zlib: not in enabled drivers build config
00:02:09.944 regex/*: missing internal dependency, "regexdev"
00:02:09.944 ml/*: missing internal dependency, "mldev"
00:02:09.944 vdpa/ifc: not in enabled drivers build config
00:02:09.944 vdpa/mlx5: not in enabled drivers build config
00:02:09.944 vdpa/nfp: not in enabled drivers build config
00:02:09.944 vdpa/sfc: not in enabled drivers build config
00:02:09.944 event/*: missing internal dependency, "eventdev"
00:02:09.944 baseband/*: missing internal dependency, "bbdev"
00:02:09.944 gpu/*: missing internal dependency, "gpudev"
00:02:09.944
00:02:09.944
00:02:10.203 Build targets in project: 85
00:02:10.203
00:02:10.203 DPDK 24.03.0
00:02:10.203
00:02:10.203 User defined options
00:02:10.203 buildtype : debug
00:02:10.203 default_library : shared
00:02:10.203 libdir : lib
00:02:10.203 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:10.203 b_sanitize : address
00:02:10.203 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:10.203 c_link_args :
00:02:10.203 cpu_instruction_set: native
00:02:10.203 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:10.203 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:10.203 enable_docs : false
00:02:10.203 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:10.203 enable_kmods : false
00:02:10.203 max_lcores : 128
00:02:10.203 tests : false
00:02:10.204
00:02:10.204 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:10.772 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:10.772 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:10.772 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:11.031 [3/268] Linking static target lib/librte_kvargs.a
00:02:11.031 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:11.031 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:11.031 [6/268] Linking static target lib/librte_log.a
00:02:11.599 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.599 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:11.599 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:11.599 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:11.599 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:11.599 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:11.599 [13/268] Linking static target lib/librte_telemetry.a 00:02:11.599 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:11.858 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:11.858 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:11.858 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:11.858 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:12.116 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.116 [20/268] Linking target lib/librte_log.so.24.1 00:02:12.376 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:12.376 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:12.376 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:12.635 [24/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.635 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:12.635 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:12.635 [27/268] Linking target lib/librte_telemetry.so.24.1 00:02:12.635 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:12.635 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:12.635 [30/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:12.635 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:12.635 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:12.635 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:12.894 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:12.894 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:12.894 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:13.152 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:13.411 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:13.670 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:13.670 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:13.670 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:13.670 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:13.670 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:13.670 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:13.670 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:13.670 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:13.929 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:13.929 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:13.929 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.188 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:14.188 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 
00:02:14.446 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:14.446 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:14.446 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:14.705 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:14.705 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:14.705 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:14.705 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:14.705 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:14.964 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:14.964 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:14.964 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:15.222 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.222 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.481 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.481 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:15.481 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:15.740 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:15.998 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:15.999 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:15.999 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:15.999 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:15.999 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:15.999 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:15.999 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.257 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.516 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.516 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.516 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.516 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.516 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.775 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.775 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.775 [84/268] Linking static target lib/librte_ring.a 00:02:16.775 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.034 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:17.034 [87/268] Linking static target lib/librte_eal.a 00:02:17.034 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:17.034 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:17.292 [90/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.292 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.292 [92/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:17.292 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.292 [94/268] Linking static target lib/librte_mempool.a 00:02:17.551 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.551 [96/268] Linking static target lib/librte_rcu.a 00:02:17.551 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.810 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.810 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:17.810 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:18.069 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.069 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.069 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.327 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.327 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:18.327 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.327 [107/268] Linking static target lib/librte_mbuf.a 00:02:18.586 [108/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.846 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.846 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.846 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.846 [112/268] Linking static target lib/librte_net.a 00:02:18.846 [113/268] Linking static target lib/librte_meter.a 00:02:18.846 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.846 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.846 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:19.104 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.363 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.363 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.622 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:19.622 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:19.622 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.881 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.140 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:20.140 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:20.140 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:20.140 [127/268] Linking static target lib/librte_pci.a 00:02:20.399 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:20.399 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.399 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:20.399 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.658 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:20.658 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:20.658 
[134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:20.658 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.658 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:20.658 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:20.658 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:20.658 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:20.658 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:20.918 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:20.918 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:20.918 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:20.918 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:21.177 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:21.177 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:21.177 [147/268] Linking static target lib/librte_cmdline.a 00:02:21.177 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:21.436 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:21.436 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:21.436 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:21.694 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:21.955 [153/268] Linking static target lib/librte_timer.a 00:02:21.955 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:21.955 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:21.955 [156/268] Linking static target lib/librte_ethdev.a 00:02:22.217 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:22.217 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:22.217 [159/268] Linking static target lib/librte_compressdev.a 00:02:22.217 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:22.217 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:22.476 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.476 [163/268] Linking static target lib/librte_hash.a 00:02:22.476 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:22.476 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.735 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.994 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:22.994 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:22.994 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:22.994 [170/268] Linking static target lib/librte_dmadev.a 00:02:22.994 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:22.994 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.253 [173/268] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:23.253 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:23.512 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.512 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:23.771 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:23.771 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.771 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:23.771 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:24.030 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:24.030 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.030 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:24.030 [184/268] Linking static target lib/librte_cryptodev.a 00:02:24.290 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:24.290 [186/268] Linking static target lib/librte_power.a 00:02:24.549 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.549 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:24.549 [189/268] Linking static target lib/librte_reorder.a 00:02:24.807 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.807 [191/268] Linking static target lib/librte_security.a 00:02:24.807 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.066 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.066 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.325 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:25.584 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.584 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.842 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:26.101 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:26.101 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:26.101 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:26.101 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:26.360 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:26.618 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:26.618 [205/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.618 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:26.877 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:26.877 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:26.877 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:27.135 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:27.135 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:27.135 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom 
command 00:02:27.394 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.394 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.394 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:27.394 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:27.394 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.394 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.394 [219/268] Linking static target drivers/librte_bus_vdev.a 00:02:27.394 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:27.394 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:27.653 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:27.653 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.653 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.653 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:27.653 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.912 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.481 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.740 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.740 [230/268] Linking target lib/librte_eal.so.24.1 00:02:28.999 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:28.999 [232/268] Linking target lib/librte_meter.so.24.1 00:02:28.999 [233/268] Linking target lib/librte_pci.so.24.1 00:02:28.999 [234/268] Linking target lib/librte_ring.so.24.1 00:02:28.999 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:28.999 [236/268] Linking target lib/librte_timer.so.24.1 00:02:28.999 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:29.258 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:29.258 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:29.258 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:29.258 [241/268] Linking target lib/librte_mempool.so.24.1 00:02:29.258 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:29.258 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:29.258 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:29.258 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:29.258 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:29.258 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:29.517 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:29.517 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:29.517 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:29.517 [251/268] Linking target lib/librte_net.so.24.1 00:02:29.517 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:29.517 [253/268] 
Linking target lib/librte_cryptodev.so.24.1 00:02:29.517 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:29.775 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:29.775 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:29.775 [257/268] Linking target lib/librte_hash.so.24.1 00:02:29.775 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:29.775 [259/268] Linking target lib/librte_security.so.24.1 00:02:29.775 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.034 [261/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:30.034 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:30.034 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:30.293 [264/268] Linking target lib/librte_power.so.24.1 00:02:32.199 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:32.199 [266/268] Linking static target lib/librte_vhost.a 00:02:34.107 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.107 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:34.107 INFO: autodetecting backend as ninja 00:02:34.107 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:52.199 CC lib/log/log.o 00:02:52.199 CC lib/log/log_flags.o 00:02:52.199 CC lib/log/log_deprecated.o 00:02:52.199 CC lib/ut/ut.o 00:02:52.199 CC lib/ut_mock/mock.o 00:02:52.199 LIB libspdk_log.a 00:02:52.199 LIB libspdk_ut_mock.a 00:02:52.199 LIB libspdk_ut.a 00:02:52.199 SO libspdk_ut.so.2.0 00:02:52.199 SO libspdk_ut_mock.so.6.0 00:02:52.199 SO libspdk_log.so.7.1 00:02:52.199 SYMLINK libspdk_ut_mock.so 00:02:52.199 SYMLINK libspdk_ut.so 00:02:52.199 SYMLINK libspdk_log.so 00:02:52.199 CC lib/util/base64.o 00:02:52.199 CC lib/util/bit_array.o 00:02:52.199 CC lib/ioat/ioat.o 00:02:52.199 CC lib/util/crc16.o 00:02:52.199 CC lib/dma/dma.o 00:02:52.199 CC lib/util/crc32.o 00:02:52.199 CC lib/util/cpuset.o 00:02:52.199 CXX lib/trace_parser/trace.o 00:02:52.199 CC lib/util/crc32c.o 00:02:52.199 CC lib/vfio_user/host/vfio_user_pci.o 00:02:52.199 CC lib/vfio_user/host/vfio_user.o 00:02:52.199 CC lib/util/crc32_ieee.o 00:02:52.199 CC lib/util/crc64.o 00:02:52.199 CC lib/util/dif.o 00:02:52.199 LIB libspdk_dma.a 00:02:52.199 CC lib/util/fd.o 00:02:52.199 CC lib/util/fd_group.o 00:02:52.199 SO libspdk_dma.so.5.0 00:02:52.459 CC lib/util/file.o 00:02:52.459 SYMLINK libspdk_dma.so 00:02:52.459 CC lib/util/hexlify.o 00:02:52.459 CC lib/util/iov.o 00:02:52.459 LIB libspdk_ioat.a 00:02:52.459 SO libspdk_ioat.so.7.0 00:02:52.459 CC lib/util/math.o 00:02:52.459 CC lib/util/net.o 00:02:52.459 LIB libspdk_vfio_user.a 00:02:52.459 SYMLINK libspdk_ioat.so 00:02:52.459 CC lib/util/pipe.o 00:02:52.459 SO libspdk_vfio_user.so.5.0 00:02:52.459 CC lib/util/strerror_tls.o 00:02:52.459 CC lib/util/string.o 00:02:52.459 SYMLINK libspdk_vfio_user.so 00:02:52.459 CC lib/util/uuid.o 00:02:52.459 CC lib/util/xor.o 00:02:52.459 CC lib/util/zipf.o 00:02:52.716 CC lib/util/md5.o 00:02:52.975 LIB libspdk_util.a 00:02:52.975 SO libspdk_util.so.10.1 00:02:53.234 LIB libspdk_trace_parser.a 00:02:53.234 SYMLINK libspdk_util.so 00:02:53.234 SO libspdk_trace_parser.so.6.0 00:02:53.234 SYMLINK libspdk_trace_parser.so 00:02:53.234 CC lib/vmd/vmd.o 00:02:53.234 CC lib/vmd/led.o 00:02:53.234 CC 
lib/json/json_parse.o 00:02:53.234 CC lib/rdma_utils/rdma_utils.o 00:02:53.234 CC lib/conf/conf.o 00:02:53.234 CC lib/json/json_util.o 00:02:53.234 CC lib/env_dpdk/env.o 00:02:53.234 CC lib/json/json_write.o 00:02:53.234 CC lib/idxd/idxd.o 00:02:53.234 CC lib/env_dpdk/memory.o 00:02:53.493 CC lib/env_dpdk/pci.o 00:02:53.493 LIB libspdk_conf.a 00:02:53.752 CC lib/env_dpdk/init.o 00:02:53.752 CC lib/env_dpdk/threads.o 00:02:53.752 SO libspdk_conf.so.6.0 00:02:53.752 LIB libspdk_rdma_utils.a 00:02:53.752 LIB libspdk_json.a 00:02:53.752 SO libspdk_rdma_utils.so.1.0 00:02:53.752 SYMLINK libspdk_conf.so 00:02:53.752 CC lib/env_dpdk/pci_ioat.o 00:02:53.752 SO libspdk_json.so.6.0 00:02:53.752 SYMLINK libspdk_rdma_utils.so 00:02:53.752 CC lib/idxd/idxd_user.o 00:02:53.752 SYMLINK libspdk_json.so 00:02:53.752 CC lib/env_dpdk/pci_virtio.o 00:02:53.752 CC lib/idxd/idxd_kernel.o 00:02:53.752 CC lib/env_dpdk/pci_vmd.o 00:02:54.021 CC lib/env_dpdk/pci_idxd.o 00:02:54.021 CC lib/env_dpdk/pci_event.o 00:02:54.021 CC lib/env_dpdk/sigbus_handler.o 00:02:54.021 CC lib/env_dpdk/pci_dpdk.o 00:02:54.021 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:54.312 CC lib/rdma_provider/common.o 00:02:54.312 CC lib/jsonrpc/jsonrpc_server.o 00:02:54.312 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:54.312 LIB libspdk_idxd.a 00:02:54.312 LIB libspdk_vmd.a 00:02:54.312 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:54.312 SO libspdk_idxd.so.12.1 00:02:54.312 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:54.312 SO libspdk_vmd.so.6.0 00:02:54.312 SYMLINK libspdk_idxd.so 00:02:54.312 CC lib/jsonrpc/jsonrpc_client.o 00:02:54.312 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:54.312 SYMLINK libspdk_vmd.so 00:02:54.592 LIB libspdk_rdma_provider.a 00:02:54.592 SO libspdk_rdma_provider.so.7.0 00:02:54.592 LIB libspdk_jsonrpc.a 00:02:54.592 SYMLINK libspdk_rdma_provider.so 00:02:54.592 SO libspdk_jsonrpc.so.6.0 00:02:54.592 SYMLINK libspdk_jsonrpc.so 00:02:54.865 CC lib/rpc/rpc.o 00:02:55.142 LIB libspdk_rpc.a 00:02:55.142 LIB libspdk_env_dpdk.a 00:02:55.142 SO libspdk_rpc.so.6.0 00:02:55.142 SYMLINK libspdk_rpc.so 00:02:55.142 SO libspdk_env_dpdk.so.15.1 00:02:55.412 SYMLINK libspdk_env_dpdk.so 00:02:55.412 CC lib/keyring/keyring.o 00:02:55.412 CC lib/keyring/keyring_rpc.o 00:02:55.412 CC lib/notify/notify.o 00:02:55.412 CC lib/notify/notify_rpc.o 00:02:55.412 CC lib/trace/trace.o 00:02:55.412 CC lib/trace/trace_flags.o 00:02:55.412 CC lib/trace/trace_rpc.o 00:02:55.672 LIB libspdk_notify.a 00:02:55.672 SO libspdk_notify.so.6.0 00:02:55.672 SYMLINK libspdk_notify.so 00:02:55.672 LIB libspdk_trace.a 00:02:55.672 LIB libspdk_keyring.a 00:02:55.672 SO libspdk_trace.so.11.0 00:02:55.672 SO libspdk_keyring.so.2.0 00:02:55.931 SYMLINK libspdk_keyring.so 00:02:55.931 SYMLINK libspdk_trace.so 00:02:56.190 CC lib/sock/sock.o 00:02:56.190 CC lib/sock/sock_rpc.o 00:02:56.190 CC lib/thread/thread.o 00:02:56.190 CC lib/thread/iobuf.o 00:02:56.450 LIB libspdk_sock.a 00:02:56.709 SO libspdk_sock.so.10.0 00:02:56.709 SYMLINK libspdk_sock.so 00:02:56.968 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:56.968 CC lib/nvme/nvme_ctrlr.o 00:02:56.968 CC lib/nvme/nvme_fabric.o 00:02:56.968 CC lib/nvme/nvme_ns.o 00:02:56.968 CC lib/nvme/nvme_ns_cmd.o 00:02:56.968 CC lib/nvme/nvme_pcie_common.o 00:02:56.968 CC lib/nvme/nvme_pcie.o 00:02:56.968 CC lib/nvme/nvme_qpair.o 00:02:56.968 CC lib/nvme/nvme.o 00:02:57.903 CC lib/nvme/nvme_quirks.o 00:02:57.903 CC lib/nvme/nvme_transport.o 00:02:57.903 CC lib/nvme/nvme_discovery.o 00:02:57.903 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.903 CC 
lib/nvme/nvme_ns_ocssd_cmd.o 00:02:58.162 CC lib/nvme/nvme_tcp.o 00:02:58.162 LIB libspdk_thread.a 00:02:58.162 CC lib/nvme/nvme_opal.o 00:02:58.162 SO libspdk_thread.so.11.0 00:02:58.162 SYMLINK libspdk_thread.so 00:02:58.162 CC lib/nvme/nvme_io_msg.o 00:02:58.162 CC lib/nvme/nvme_poll_group.o 00:02:58.421 CC lib/nvme/nvme_zns.o 00:02:58.680 CC lib/nvme/nvme_stubs.o 00:02:58.680 CC lib/nvme/nvme_auth.o 00:02:58.680 CC lib/accel/accel.o 00:02:58.939 CC lib/blob/blobstore.o 00:02:58.939 CC lib/accel/accel_rpc.o 00:02:58.939 CC lib/accel/accel_sw.o 00:02:58.939 CC lib/nvme/nvme_cuse.o 00:02:59.197 CC lib/nvme/nvme_rdma.o 00:02:59.197 CC lib/blob/request.o 00:02:59.197 CC lib/init/json_config.o 00:02:59.197 CC lib/init/subsystem.o 00:02:59.456 CC lib/init/subsystem_rpc.o 00:02:59.456 CC lib/init/rpc.o 00:02:59.714 CC lib/blob/zeroes.o 00:02:59.714 LIB libspdk_init.a 00:02:59.714 CC lib/virtio/virtio.o 00:02:59.714 SO libspdk_init.so.6.0 00:02:59.714 CC lib/virtio/virtio_vhost_user.o 00:02:59.714 SYMLINK libspdk_init.so 00:02:59.714 CC lib/virtio/virtio_vfio_user.o 00:02:59.973 CC lib/blob/blob_bs_dev.o 00:02:59.973 CC lib/fsdev/fsdev.o 00:02:59.973 CC lib/virtio/virtio_pci.o 00:02:59.973 CC lib/fsdev/fsdev_io.o 00:03:00.232 CC lib/fsdev/fsdev_rpc.o 00:03:00.232 CC lib/event/app.o 00:03:00.232 CC lib/event/reactor.o 00:03:00.232 LIB libspdk_accel.a 00:03:00.232 CC lib/event/log_rpc.o 00:03:00.232 SO libspdk_accel.so.16.0 00:03:00.232 CC lib/event/app_rpc.o 00:03:00.232 SYMLINK libspdk_accel.so 00:03:00.232 CC lib/event/scheduler_static.o 00:03:00.490 LIB libspdk_virtio.a 00:03:00.490 SO libspdk_virtio.so.7.0 00:03:00.490 SYMLINK libspdk_virtio.so 00:03:00.490 CC lib/bdev/bdev.o 00:03:00.490 CC lib/bdev/bdev_rpc.o 00:03:00.490 CC lib/bdev/part.o 00:03:00.490 CC lib/bdev/bdev_zone.o 00:03:00.490 CC lib/bdev/scsi_nvme.o 00:03:00.749 LIB libspdk_event.a 00:03:00.749 LIB libspdk_nvme.a 00:03:00.749 LIB libspdk_fsdev.a 00:03:00.749 SO libspdk_event.so.14.0 00:03:00.749 SO libspdk_fsdev.so.2.0 00:03:00.749 SYMLINK libspdk_fsdev.so 00:03:00.749 SYMLINK libspdk_event.so 00:03:01.008 SO libspdk_nvme.so.15.0 00:03:01.008 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:01.267 SYMLINK libspdk_nvme.so 00:03:01.834 LIB libspdk_fuse_dispatcher.a 00:03:01.834 SO libspdk_fuse_dispatcher.so.1.0 00:03:01.834 SYMLINK libspdk_fuse_dispatcher.so 00:03:02.771 LIB libspdk_blob.a 00:03:03.030 SO libspdk_blob.so.11.0 00:03:03.030 SYMLINK libspdk_blob.so 00:03:03.289 CC lib/lvol/lvol.o 00:03:03.289 CC lib/blobfs/tree.o 00:03:03.289 CC lib/blobfs/blobfs.o 00:03:03.856 LIB libspdk_bdev.a 00:03:03.856 SO libspdk_bdev.so.17.0 00:03:03.856 SYMLINK libspdk_bdev.so 00:03:04.115 CC lib/ublk/ublk.o 00:03:04.115 CC lib/ublk/ublk_rpc.o 00:03:04.115 CC lib/nvmf/ctrlr.o 00:03:04.115 CC lib/scsi/dev.o 00:03:04.115 CC lib/nvmf/ctrlr_discovery.o 00:03:04.115 CC lib/nvmf/ctrlr_bdev.o 00:03:04.115 CC lib/ftl/ftl_core.o 00:03:04.115 CC lib/nbd/nbd.o 00:03:04.382 LIB libspdk_blobfs.a 00:03:04.382 SO libspdk_blobfs.so.10.0 00:03:04.382 SYMLINK libspdk_blobfs.so 00:03:04.382 CC lib/nbd/nbd_rpc.o 00:03:04.382 CC lib/scsi/lun.o 00:03:04.382 CC lib/scsi/port.o 00:03:04.382 LIB libspdk_lvol.a 00:03:04.642 SO libspdk_lvol.so.10.0 00:03:04.642 SYMLINK libspdk_lvol.so 00:03:04.642 CC lib/ftl/ftl_init.o 00:03:04.642 CC lib/ftl/ftl_layout.o 00:03:04.642 CC lib/nvmf/subsystem.o 00:03:04.642 CC lib/ftl/ftl_debug.o 00:03:04.642 LIB libspdk_nbd.a 00:03:04.642 SO libspdk_nbd.so.7.0 00:03:04.901 CC lib/ftl/ftl_io.o 00:03:04.901 CC lib/scsi/scsi.o 
00:03:04.901 SYMLINK libspdk_nbd.so 00:03:04.901 CC lib/nvmf/nvmf.o 00:03:04.901 CC lib/ftl/ftl_sb.o 00:03:04.901 CC lib/scsi/scsi_bdev.o 00:03:04.901 CC lib/scsi/scsi_pr.o 00:03:04.901 CC lib/ftl/ftl_l2p.o 00:03:05.159 CC lib/ftl/ftl_l2p_flat.o 00:03:05.159 LIB libspdk_ublk.a 00:03:05.159 CC lib/ftl/ftl_nv_cache.o 00:03:05.159 SO libspdk_ublk.so.3.0 00:03:05.159 CC lib/ftl/ftl_band.o 00:03:05.159 SYMLINK libspdk_ublk.so 00:03:05.159 CC lib/ftl/ftl_band_ops.o 00:03:05.159 CC lib/ftl/ftl_writer.o 00:03:05.159 CC lib/nvmf/nvmf_rpc.o 00:03:05.417 CC lib/scsi/scsi_rpc.o 00:03:05.417 CC lib/ftl/ftl_rq.o 00:03:05.417 CC lib/ftl/ftl_reloc.o 00:03:05.417 CC lib/ftl/ftl_l2p_cache.o 00:03:05.676 CC lib/scsi/task.o 00:03:05.676 CC lib/ftl/ftl_p2l.o 00:03:05.676 CC lib/ftl/ftl_p2l_log.o 00:03:05.935 LIB libspdk_scsi.a 00:03:05.935 SO libspdk_scsi.so.9.0 00:03:05.935 CC lib/nvmf/transport.o 00:03:05.935 CC lib/nvmf/tcp.o 00:03:05.935 SYMLINK libspdk_scsi.so 00:03:05.935 CC lib/nvmf/stubs.o 00:03:06.194 CC lib/iscsi/conn.o 00:03:06.194 CC lib/nvmf/mdns_server.o 00:03:06.194 CC lib/nvmf/rdma.o 00:03:06.194 CC lib/vhost/vhost.o 00:03:06.452 CC lib/ftl/mngt/ftl_mngt.o 00:03:06.452 CC lib/nvmf/auth.o 00:03:06.452 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:06.452 CC lib/iscsi/init_grp.o 00:03:06.711 CC lib/vhost/vhost_rpc.o 00:03:06.711 CC lib/iscsi/iscsi.o 00:03:06.711 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:06.970 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:06.970 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:06.970 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:06.970 CC lib/vhost/vhost_scsi.o 00:03:07.229 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:07.229 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:07.229 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:07.229 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:07.229 CC lib/iscsi/param.o 00:03:07.487 CC lib/iscsi/portal_grp.o 00:03:07.487 CC lib/iscsi/tgt_node.o 00:03:07.487 CC lib/vhost/vhost_blk.o 00:03:07.487 CC lib/vhost/rte_vhost_user.o 00:03:07.746 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:07.746 CC lib/iscsi/iscsi_subsystem.o 00:03:07.746 CC lib/iscsi/iscsi_rpc.o 00:03:08.005 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:08.005 CC lib/iscsi/task.o 00:03:08.005 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:08.263 CC lib/ftl/utils/ftl_conf.o 00:03:08.263 CC lib/ftl/utils/ftl_md.o 00:03:08.263 CC lib/ftl/utils/ftl_mempool.o 00:03:08.263 CC lib/ftl/utils/ftl_bitmap.o 00:03:08.263 CC lib/ftl/utils/ftl_property.o 00:03:08.263 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:08.522 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:08.522 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:08.522 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:08.522 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:08.522 LIB libspdk_iscsi.a 00:03:08.522 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:08.780 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:08.780 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:08.780 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:08.780 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:08.780 SO libspdk_iscsi.so.8.0 00:03:08.780 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:08.780 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:08.780 LIB libspdk_vhost.a 00:03:08.780 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:08.780 CC lib/ftl/base/ftl_base_dev.o 00:03:08.780 SYMLINK libspdk_iscsi.so 00:03:08.780 CC lib/ftl/base/ftl_base_bdev.o 00:03:08.780 SO libspdk_vhost.so.8.0 00:03:09.039 CC lib/ftl/ftl_trace.o 00:03:09.040 SYMLINK libspdk_vhost.so 00:03:09.040 LIB libspdk_nvmf.a 00:03:09.299 LIB libspdk_ftl.a 00:03:09.299 SO libspdk_nvmf.so.20.0 00:03:09.558 SO libspdk_ftl.so.9.0 00:03:09.558 
SYMLINK libspdk_nvmf.so 00:03:09.817 SYMLINK libspdk_ftl.so 00:03:10.077 CC module/env_dpdk/env_dpdk_rpc.o 00:03:10.077 CC module/blob/bdev/blob_bdev.o 00:03:10.077 CC module/scheduler/gscheduler/gscheduler.o 00:03:10.077 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:10.077 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:10.077 CC module/sock/posix/posix.o 00:03:10.077 CC module/keyring/linux/keyring.o 00:03:10.077 CC module/fsdev/aio/fsdev_aio.o 00:03:10.077 CC module/accel/error/accel_error.o 00:03:10.077 CC module/keyring/file/keyring.o 00:03:10.077 LIB libspdk_env_dpdk_rpc.a 00:03:10.077 SO libspdk_env_dpdk_rpc.so.6.0 00:03:10.335 SYMLINK libspdk_env_dpdk_rpc.so 00:03:10.335 CC module/accel/error/accel_error_rpc.o 00:03:10.335 CC module/keyring/linux/keyring_rpc.o 00:03:10.335 LIB libspdk_scheduler_gscheduler.a 00:03:10.335 LIB libspdk_scheduler_dpdk_governor.a 00:03:10.335 CC module/keyring/file/keyring_rpc.o 00:03:10.335 SO libspdk_scheduler_gscheduler.so.4.0 00:03:10.335 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:10.335 LIB libspdk_scheduler_dynamic.a 00:03:10.335 SO libspdk_scheduler_dynamic.so.4.0 00:03:10.335 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:10.335 SYMLINK libspdk_scheduler_gscheduler.so 00:03:10.335 LIB libspdk_accel_error.a 00:03:10.335 LIB libspdk_keyring_linux.a 00:03:10.335 SYMLINK libspdk_scheduler_dynamic.so 00:03:10.335 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:10.335 LIB libspdk_blob_bdev.a 00:03:10.335 LIB libspdk_keyring_file.a 00:03:10.335 SO libspdk_keyring_linux.so.1.0 00:03:10.335 SO libspdk_accel_error.so.2.0 00:03:10.593 SO libspdk_blob_bdev.so.11.0 00:03:10.593 SO libspdk_keyring_file.so.2.0 00:03:10.593 SYMLINK libspdk_keyring_linux.so 00:03:10.593 SYMLINK libspdk_accel_error.so 00:03:10.593 SYMLINK libspdk_blob_bdev.so 00:03:10.593 CC module/fsdev/aio/linux_aio_mgr.o 00:03:10.593 SYMLINK libspdk_keyring_file.so 00:03:10.594 CC module/accel/ioat/accel_ioat.o 00:03:10.594 CC module/accel/ioat/accel_ioat_rpc.o 00:03:10.594 CC module/accel/iaa/accel_iaa.o 00:03:10.594 CC module/accel/dsa/accel_dsa.o 00:03:10.594 CC module/accel/dsa/accel_dsa_rpc.o 00:03:10.852 CC module/accel/iaa/accel_iaa_rpc.o 00:03:10.852 LIB libspdk_accel_ioat.a 00:03:10.852 CC module/blobfs/bdev/blobfs_bdev.o 00:03:10.852 SO libspdk_accel_ioat.so.6.0 00:03:10.852 CC module/bdev/delay/vbdev_delay.o 00:03:10.852 SYMLINK libspdk_accel_ioat.so 00:03:10.852 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:10.852 CC module/bdev/error/vbdev_error.o 00:03:10.852 LIB libspdk_accel_iaa.a 00:03:10.852 CC module/bdev/gpt/gpt.o 00:03:10.852 LIB libspdk_accel_dsa.a 00:03:10.852 SO libspdk_accel_iaa.so.3.0 00:03:11.111 SO libspdk_accel_dsa.so.5.0 00:03:11.111 LIB libspdk_fsdev_aio.a 00:03:11.111 SYMLINK libspdk_accel_iaa.so 00:03:11.111 CC module/bdev/error/vbdev_error_rpc.o 00:03:11.111 SO libspdk_fsdev_aio.so.1.0 00:03:11.111 CC module/bdev/lvol/vbdev_lvol.o 00:03:11.111 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:11.111 SYMLINK libspdk_accel_dsa.so 00:03:11.111 CC module/bdev/gpt/vbdev_gpt.o 00:03:11.111 LIB libspdk_sock_posix.a 00:03:11.111 LIB libspdk_blobfs_bdev.a 00:03:11.111 SO libspdk_sock_posix.so.6.0 00:03:11.111 SYMLINK libspdk_fsdev_aio.so 00:03:11.111 SO libspdk_blobfs_bdev.so.6.0 00:03:11.111 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:11.111 SYMLINK libspdk_blobfs_bdev.so 00:03:11.111 SYMLINK libspdk_sock_posix.so 00:03:11.370 LIB libspdk_bdev_error.a 00:03:11.370 SO libspdk_bdev_error.so.6.0 00:03:11.370 CC module/bdev/malloc/bdev_malloc.o 00:03:11.370 
SYMLINK libspdk_bdev_error.so 00:03:11.370 CC module/bdev/nvme/bdev_nvme.o 00:03:11.370 LIB libspdk_bdev_delay.a 00:03:11.370 CC module/bdev/null/bdev_null.o 00:03:11.370 SO libspdk_bdev_delay.so.6.0 00:03:11.370 LIB libspdk_bdev_gpt.a 00:03:11.370 CC module/bdev/passthru/vbdev_passthru.o 00:03:11.370 SO libspdk_bdev_gpt.so.6.0 00:03:11.370 SYMLINK libspdk_bdev_delay.so 00:03:11.370 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:11.370 CC module/bdev/raid/bdev_raid.o 00:03:11.629 CC module/bdev/split/vbdev_split.o 00:03:11.629 SYMLINK libspdk_bdev_gpt.so 00:03:11.629 CC module/bdev/null/bdev_null_rpc.o 00:03:11.629 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:11.629 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:11.629 LIB libspdk_bdev_lvol.a 00:03:11.629 CC module/bdev/nvme/nvme_rpc.o 00:03:11.629 LIB libspdk_bdev_null.a 00:03:11.629 SO libspdk_bdev_lvol.so.6.0 00:03:11.888 SO libspdk_bdev_null.so.6.0 00:03:11.888 LIB libspdk_bdev_passthru.a 00:03:11.888 CC module/bdev/split/vbdev_split_rpc.o 00:03:11.888 SYMLINK libspdk_bdev_lvol.so 00:03:11.888 CC module/bdev/nvme/bdev_mdns_client.o 00:03:11.888 SO libspdk_bdev_passthru.so.6.0 00:03:11.888 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:11.888 SYMLINK libspdk_bdev_null.so 00:03:11.888 CC module/bdev/nvme/vbdev_opal.o 00:03:11.888 SYMLINK libspdk_bdev_passthru.so 00:03:11.888 LIB libspdk_bdev_split.a 00:03:12.147 LIB libspdk_bdev_malloc.a 00:03:12.147 SO libspdk_bdev_split.so.6.0 00:03:12.147 SO libspdk_bdev_malloc.so.6.0 00:03:12.147 CC module/bdev/xnvme/bdev_xnvme.o 00:03:12.147 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:12.147 SYMLINK libspdk_bdev_split.so 00:03:12.147 SYMLINK libspdk_bdev_malloc.so 00:03:12.147 CC module/bdev/raid/bdev_raid_rpc.o 00:03:12.147 CC module/bdev/aio/bdev_aio.o 00:03:12.147 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:12.147 CC module/bdev/ftl/bdev_ftl.o 00:03:12.147 LIB libspdk_bdev_zone_block.a 00:03:12.147 CC module/bdev/iscsi/bdev_iscsi.o 00:03:12.405 SO libspdk_bdev_zone_block.so.6.0 00:03:12.405 SYMLINK libspdk_bdev_zone_block.so 00:03:12.405 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:12.405 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:12.405 CC module/bdev/raid/bdev_raid_sb.o 00:03:12.405 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:12.405 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:12.664 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:12.664 LIB libspdk_bdev_xnvme.a 00:03:12.664 CC module/bdev/aio/bdev_aio_rpc.o 00:03:12.664 SO libspdk_bdev_xnvme.so.3.0 00:03:12.664 LIB libspdk_bdev_ftl.a 00:03:12.664 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:12.664 SO libspdk_bdev_ftl.so.6.0 00:03:12.664 SYMLINK libspdk_bdev_xnvme.so 00:03:12.664 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:12.664 LIB libspdk_bdev_iscsi.a 00:03:12.664 CC module/bdev/raid/raid0.o 00:03:12.664 CC module/bdev/raid/raid1.o 00:03:12.664 SO libspdk_bdev_iscsi.so.6.0 00:03:12.664 LIB libspdk_bdev_aio.a 00:03:12.664 SYMLINK libspdk_bdev_ftl.so 00:03:12.923 SO libspdk_bdev_aio.so.6.0 00:03:12.923 CC module/bdev/raid/concat.o 00:03:12.923 SYMLINK libspdk_bdev_aio.so 00:03:12.923 SYMLINK libspdk_bdev_iscsi.so 00:03:13.182 LIB libspdk_bdev_raid.a 00:03:13.182 SO libspdk_bdev_raid.so.6.0 00:03:13.182 LIB libspdk_bdev_virtio.a 00:03:13.182 SYMLINK libspdk_bdev_raid.so 00:03:13.182 SO libspdk_bdev_virtio.so.6.0 00:03:13.441 SYMLINK libspdk_bdev_virtio.so 00:03:14.377 LIB libspdk_bdev_nvme.a 00:03:14.653 SO libspdk_bdev_nvme.so.7.1 00:03:14.653 SYMLINK libspdk_bdev_nvme.so 00:03:15.233 CC module/event/subsystems/vmd/vmd.o 
00:03:15.233 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:15.233 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:15.233 CC module/event/subsystems/fsdev/fsdev.o 00:03:15.233 CC module/event/subsystems/sock/sock.o 00:03:15.233 CC module/event/subsystems/keyring/keyring.o 00:03:15.233 CC module/event/subsystems/iobuf/iobuf.o 00:03:15.233 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.233 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.233 LIB libspdk_event_scheduler.a 00:03:15.233 LIB libspdk_event_vhost_blk.a 00:03:15.234 LIB libspdk_event_keyring.a 00:03:15.234 LIB libspdk_event_vmd.a 00:03:15.234 LIB libspdk_event_fsdev.a 00:03:15.234 LIB libspdk_event_sock.a 00:03:15.234 SO libspdk_event_scheduler.so.4.0 00:03:15.234 SO libspdk_event_keyring.so.1.0 00:03:15.234 SO libspdk_event_vhost_blk.so.3.0 00:03:15.234 LIB libspdk_event_iobuf.a 00:03:15.234 SO libspdk_event_fsdev.so.1.0 00:03:15.234 SO libspdk_event_vmd.so.6.0 00:03:15.234 SO libspdk_event_sock.so.5.0 00:03:15.492 SO libspdk_event_iobuf.so.3.0 00:03:15.492 SYMLINK libspdk_event_keyring.so 00:03:15.492 SYMLINK libspdk_event_vhost_blk.so 00:03:15.492 SYMLINK libspdk_event_scheduler.so 00:03:15.492 SYMLINK libspdk_event_sock.so 00:03:15.492 SYMLINK libspdk_event_fsdev.so 00:03:15.492 SYMLINK libspdk_event_vmd.so 00:03:15.492 SYMLINK libspdk_event_iobuf.so 00:03:15.751 CC module/event/subsystems/accel/accel.o 00:03:15.751 LIB libspdk_event_accel.a 00:03:16.010 SO libspdk_event_accel.so.6.0 00:03:16.010 SYMLINK libspdk_event_accel.so 00:03:16.270 CC module/event/subsystems/bdev/bdev.o 00:03:16.529 LIB libspdk_event_bdev.a 00:03:16.529 SO libspdk_event_bdev.so.6.0 00:03:16.529 SYMLINK libspdk_event_bdev.so 00:03:16.788 CC module/event/subsystems/scsi/scsi.o 00:03:16.788 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:16.788 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:16.788 CC module/event/subsystems/nbd/nbd.o 00:03:16.788 CC module/event/subsystems/ublk/ublk.o 00:03:16.788 LIB libspdk_event_nbd.a 00:03:17.047 SO libspdk_event_nbd.so.6.0 00:03:17.047 LIB libspdk_event_scsi.a 00:03:17.047 LIB libspdk_event_ublk.a 00:03:17.047 LIB libspdk_event_nvmf.a 00:03:17.047 SYMLINK libspdk_event_nbd.so 00:03:17.047 SO libspdk_event_ublk.so.3.0 00:03:17.047 SO libspdk_event_scsi.so.6.0 00:03:17.047 SO libspdk_event_nvmf.so.6.0 00:03:17.047 SYMLINK libspdk_event_ublk.so 00:03:17.047 SYMLINK libspdk_event_scsi.so 00:03:17.047 SYMLINK libspdk_event_nvmf.so 00:03:17.306 CC module/event/subsystems/iscsi/iscsi.o 00:03:17.306 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:17.564 LIB libspdk_event_vhost_scsi.a 00:03:17.564 LIB libspdk_event_iscsi.a 00:03:17.564 SO libspdk_event_vhost_scsi.so.3.0 00:03:17.564 SO libspdk_event_iscsi.so.6.0 00:03:17.564 SYMLINK libspdk_event_vhost_scsi.so 00:03:17.564 SYMLINK libspdk_event_iscsi.so 00:03:17.823 SO libspdk.so.6.0 00:03:17.823 SYMLINK libspdk.so 00:03:18.082 CC app/trace_record/trace_record.o 00:03:18.082 CC app/spdk_nvme_perf/perf.o 00:03:18.082 CXX app/trace/trace.o 00:03:18.082 CC app/spdk_lspci/spdk_lspci.o 00:03:18.082 CC app/spdk_nvme_identify/identify.o 00:03:18.082 CC app/nvmf_tgt/nvmf_main.o 00:03:18.082 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.082 CC app/spdk_tgt/spdk_tgt.o 00:03:18.082 CC examples/util/zipf/zipf.o 00:03:18.082 CC test/thread/poller_perf/poller_perf.o 00:03:18.082 LINK spdk_lspci 00:03:18.353 LINK nvmf_tgt 00:03:18.353 LINK poller_perf 00:03:18.353 LINK iscsi_tgt 00:03:18.353 LINK zipf 00:03:18.353 LINK spdk_trace_record 00:03:18.353 LINK spdk_tgt 
00:03:18.353 CC app/spdk_nvme_discover/discovery_aer.o 00:03:18.614 LINK spdk_trace 00:03:18.614 CC app/spdk_top/spdk_top.o 00:03:18.614 CC test/dma/test_dma/test_dma.o 00:03:18.614 LINK spdk_nvme_discover 00:03:18.614 CC examples/ioat/perf/perf.o 00:03:18.614 CC examples/vmd/lsvmd/lsvmd.o 00:03:18.614 CC examples/idxd/perf/perf.o 00:03:18.615 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:18.872 CC app/spdk_dd/spdk_dd.o 00:03:18.872 LINK lsvmd 00:03:18.872 CC examples/ioat/verify/verify.o 00:03:18.872 LINK interrupt_tgt 00:03:18.872 LINK ioat_perf 00:03:19.131 CC examples/vmd/led/led.o 00:03:19.131 LINK idxd_perf 00:03:19.131 LINK spdk_nvme_perf 00:03:19.131 LINK spdk_nvme_identify 00:03:19.131 LINK verify 00:03:19.131 LINK test_dma 00:03:19.390 LINK led 00:03:19.390 CC examples/sock/hello_world/hello_sock.o 00:03:19.391 LINK spdk_dd 00:03:19.391 CC examples/thread/thread/thread_ex.o 00:03:19.391 CC app/vhost/vhost.o 00:03:19.391 TEST_HEADER include/spdk/accel.h 00:03:19.391 TEST_HEADER include/spdk/accel_module.h 00:03:19.391 TEST_HEADER include/spdk/assert.h 00:03:19.391 TEST_HEADER include/spdk/barrier.h 00:03:19.391 TEST_HEADER include/spdk/base64.h 00:03:19.391 TEST_HEADER include/spdk/bdev.h 00:03:19.391 TEST_HEADER include/spdk/bdev_module.h 00:03:19.391 CC app/fio/nvme/fio_plugin.o 00:03:19.391 TEST_HEADER include/spdk/bdev_zone.h 00:03:19.391 TEST_HEADER include/spdk/bit_array.h 00:03:19.391 TEST_HEADER include/spdk/bit_pool.h 00:03:19.391 TEST_HEADER include/spdk/blob_bdev.h 00:03:19.391 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:19.391 TEST_HEADER include/spdk/blobfs.h 00:03:19.391 TEST_HEADER include/spdk/blob.h 00:03:19.391 TEST_HEADER include/spdk/conf.h 00:03:19.391 TEST_HEADER include/spdk/config.h 00:03:19.391 TEST_HEADER include/spdk/cpuset.h 00:03:19.391 TEST_HEADER include/spdk/crc16.h 00:03:19.391 TEST_HEADER include/spdk/crc32.h 00:03:19.391 TEST_HEADER include/spdk/crc64.h 00:03:19.391 TEST_HEADER include/spdk/dif.h 00:03:19.391 TEST_HEADER include/spdk/dma.h 00:03:19.391 TEST_HEADER include/spdk/endian.h 00:03:19.391 TEST_HEADER include/spdk/env_dpdk.h 00:03:19.391 TEST_HEADER include/spdk/env.h 00:03:19.391 TEST_HEADER include/spdk/event.h 00:03:19.391 TEST_HEADER include/spdk/fd_group.h 00:03:19.391 TEST_HEADER include/spdk/fd.h 00:03:19.391 TEST_HEADER include/spdk/file.h 00:03:19.391 TEST_HEADER include/spdk/fsdev.h 00:03:19.391 TEST_HEADER include/spdk/fsdev_module.h 00:03:19.391 TEST_HEADER include/spdk/ftl.h 00:03:19.391 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:19.391 TEST_HEADER include/spdk/gpt_spec.h 00:03:19.391 TEST_HEADER include/spdk/hexlify.h 00:03:19.391 TEST_HEADER include/spdk/histogram_data.h 00:03:19.391 TEST_HEADER include/spdk/idxd.h 00:03:19.391 TEST_HEADER include/spdk/idxd_spec.h 00:03:19.391 TEST_HEADER include/spdk/init.h 00:03:19.391 TEST_HEADER include/spdk/ioat.h 00:03:19.391 TEST_HEADER include/spdk/ioat_spec.h 00:03:19.391 TEST_HEADER include/spdk/iscsi_spec.h 00:03:19.649 TEST_HEADER include/spdk/json.h 00:03:19.649 TEST_HEADER include/spdk/jsonrpc.h 00:03:19.649 TEST_HEADER include/spdk/keyring.h 00:03:19.649 CC test/app/bdev_svc/bdev_svc.o 00:03:19.649 TEST_HEADER include/spdk/keyring_module.h 00:03:19.649 TEST_HEADER include/spdk/likely.h 00:03:19.649 TEST_HEADER include/spdk/log.h 00:03:19.649 TEST_HEADER include/spdk/lvol.h 00:03:19.649 TEST_HEADER include/spdk/md5.h 00:03:19.649 TEST_HEADER include/spdk/memory.h 00:03:19.649 TEST_HEADER include/spdk/mmio.h 00:03:19.649 TEST_HEADER include/spdk/nbd.h 00:03:19.649 
TEST_HEADER include/spdk/net.h 00:03:19.649 TEST_HEADER include/spdk/notify.h 00:03:19.650 TEST_HEADER include/spdk/nvme.h 00:03:19.650 TEST_HEADER include/spdk/nvme_intel.h 00:03:19.650 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:19.650 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:19.650 TEST_HEADER include/spdk/nvme_spec.h 00:03:19.650 TEST_HEADER include/spdk/nvme_zns.h 00:03:19.650 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:19.650 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:19.650 TEST_HEADER include/spdk/nvmf.h 00:03:19.650 TEST_HEADER include/spdk/nvmf_spec.h 00:03:19.650 TEST_HEADER include/spdk/nvmf_transport.h 00:03:19.650 TEST_HEADER include/spdk/opal.h 00:03:19.650 TEST_HEADER include/spdk/opal_spec.h 00:03:19.650 LINK thread 00:03:19.650 TEST_HEADER include/spdk/pci_ids.h 00:03:19.650 TEST_HEADER include/spdk/pipe.h 00:03:19.650 TEST_HEADER include/spdk/queue.h 00:03:19.650 CC test/app/histogram_perf/histogram_perf.o 00:03:19.650 TEST_HEADER include/spdk/reduce.h 00:03:19.650 TEST_HEADER include/spdk/rpc.h 00:03:19.650 TEST_HEADER include/spdk/scheduler.h 00:03:19.650 TEST_HEADER include/spdk/scsi.h 00:03:19.650 TEST_HEADER include/spdk/scsi_spec.h 00:03:19.650 TEST_HEADER include/spdk/sock.h 00:03:19.650 TEST_HEADER include/spdk/stdinc.h 00:03:19.650 LINK hello_sock 00:03:19.650 TEST_HEADER include/spdk/string.h 00:03:19.650 TEST_HEADER include/spdk/thread.h 00:03:19.650 TEST_HEADER include/spdk/trace.h 00:03:19.650 TEST_HEADER include/spdk/trace_parser.h 00:03:19.650 TEST_HEADER include/spdk/tree.h 00:03:19.650 TEST_HEADER include/spdk/ublk.h 00:03:19.650 TEST_HEADER include/spdk/util.h 00:03:19.650 TEST_HEADER include/spdk/uuid.h 00:03:19.650 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:19.650 TEST_HEADER include/spdk/version.h 00:03:19.650 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:19.650 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:19.650 TEST_HEADER include/spdk/vhost.h 00:03:19.650 LINK vhost 00:03:19.650 TEST_HEADER include/spdk/vmd.h 00:03:19.650 TEST_HEADER include/spdk/xor.h 00:03:19.650 TEST_HEADER include/spdk/zipf.h 00:03:19.650 CXX test/cpp_headers/accel.o 00:03:19.650 CC test/env/mem_callbacks/mem_callbacks.o 00:03:19.650 LINK bdev_svc 00:03:19.650 LINK histogram_perf 00:03:19.909 LINK spdk_top 00:03:19.909 CXX test/cpp_headers/accel_module.o 00:03:19.909 CC test/app/jsoncat/jsoncat.o 00:03:19.909 CC examples/nvme/hello_world/hello_world.o 00:03:19.909 CC examples/nvme/reconnect/reconnect.o 00:03:19.909 LINK jsoncat 00:03:19.909 CXX test/cpp_headers/assert.o 00:03:19.909 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.168 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:20.168 CC app/fio/bdev/fio_plugin.o 00:03:20.168 LINK nvme_fuzz 00:03:20.168 LINK spdk_nvme 00:03:20.168 CXX test/cpp_headers/barrier.o 00:03:20.168 LINK hello_world 00:03:20.168 CC examples/nvme/arbitration/arbitration.o 00:03:20.426 LINK mem_callbacks 00:03:20.426 CXX test/cpp_headers/base64.o 00:03:20.426 LINK hello_fsdev 00:03:20.426 LINK reconnect 00:03:20.426 CC examples/nvme/hotplug/hotplug.o 00:03:20.426 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:20.426 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.685 CC test/env/vtophys/vtophys.o 00:03:20.685 CXX test/cpp_headers/bdev.o 00:03:20.685 CXX test/cpp_headers/bdev_module.o 00:03:20.685 CXX test/cpp_headers/bdev_zone.o 00:03:20.685 LINK nvme_manage 00:03:20.685 LINK spdk_bdev 00:03:20.685 LINK arbitration 00:03:20.685 LINK hotplug 00:03:20.685 LINK cmb_copy 00:03:20.685 LINK vtophys 00:03:20.943 CXX 
test/cpp_headers/bit_array.o 00:03:20.943 CXX test/cpp_headers/bit_pool.o 00:03:20.944 CC examples/nvme/abort/abort.o 00:03:20.944 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:20.944 CC test/app/stub/stub.o 00:03:20.944 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.944 CC examples/accel/perf/accel_perf.o 00:03:20.944 CC examples/blob/hello_world/hello_blob.o 00:03:20.944 CXX test/cpp_headers/blob_bdev.o 00:03:21.203 CC test/event/event_perf/event_perf.o 00:03:21.203 CC test/event/reactor/reactor.o 00:03:21.203 LINK env_dpdk_post_init 00:03:21.203 LINK stub 00:03:21.203 LINK pmr_persistence 00:03:21.203 CXX test/cpp_headers/blobfs_bdev.o 00:03:21.203 LINK event_perf 00:03:21.203 LINK reactor 00:03:21.203 LINK hello_blob 00:03:21.462 LINK abort 00:03:21.462 CC test/env/memory/memory_ut.o 00:03:21.462 CXX test/cpp_headers/blobfs.o 00:03:21.462 CC test/event/reactor_perf/reactor_perf.o 00:03:21.462 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:21.462 CC examples/blob/cli/blobcli.o 00:03:21.462 CC test/event/app_repeat/app_repeat.o 00:03:21.720 CXX test/cpp_headers/blob.o 00:03:21.720 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:21.720 LINK reactor_perf 00:03:21.720 LINK accel_perf 00:03:21.720 CC test/rpc_client/rpc_client_test.o 00:03:21.720 CC test/nvme/aer/aer.o 00:03:21.720 LINK app_repeat 00:03:21.720 CXX test/cpp_headers/conf.o 00:03:21.980 LINK rpc_client_test 00:03:21.980 CC test/event/scheduler/scheduler.o 00:03:21.980 CXX test/cpp_headers/config.o 00:03:21.980 LINK aer 00:03:21.980 CC test/accel/dif/dif.o 00:03:21.980 CXX test/cpp_headers/cpuset.o 00:03:21.980 CC test/blobfs/mkfs/mkfs.o 00:03:22.239 LINK blobcli 00:03:22.239 LINK vhost_fuzz 00:03:22.239 LINK scheduler 00:03:22.239 CXX test/cpp_headers/crc16.o 00:03:22.239 CC test/nvme/reset/reset.o 00:03:22.239 LINK mkfs 00:03:22.239 CC test/lvol/esnap/esnap.o 00:03:22.498 CC test/env/pci/pci_ut.o 00:03:22.498 CXX test/cpp_headers/crc32.o 00:03:22.498 CC test/nvme/sgl/sgl.o 00:03:22.498 CC examples/bdev/hello_world/hello_bdev.o 00:03:22.498 CC test/nvme/e2edp/nvme_dp.o 00:03:22.498 CXX test/cpp_headers/crc64.o 00:03:22.498 LINK reset 00:03:22.757 LINK iscsi_fuzz 00:03:22.757 CXX test/cpp_headers/dif.o 00:03:22.757 LINK memory_ut 00:03:22.757 LINK sgl 00:03:22.757 LINK hello_bdev 00:03:22.757 CC test/nvme/overhead/overhead.o 00:03:22.757 CXX test/cpp_headers/dma.o 00:03:23.016 LINK pci_ut 00:03:23.016 LINK nvme_dp 00:03:23.016 LINK dif 00:03:23.016 CXX test/cpp_headers/endian.o 00:03:23.016 CC test/nvme/err_injection/err_injection.o 00:03:23.016 CXX test/cpp_headers/env_dpdk.o 00:03:23.016 CC test/nvme/startup/startup.o 00:03:23.016 CXX test/cpp_headers/env.o 00:03:23.275 CXX test/cpp_headers/event.o 00:03:23.275 CC examples/bdev/bdevperf/bdevperf.o 00:03:23.275 LINK overhead 00:03:23.275 CXX test/cpp_headers/fd_group.o 00:03:23.275 LINK err_injection 00:03:23.275 CXX test/cpp_headers/fd.o 00:03:23.275 LINK startup 00:03:23.275 CC test/nvme/reserve/reserve.o 00:03:23.275 CXX test/cpp_headers/file.o 00:03:23.275 CXX test/cpp_headers/fsdev.o 00:03:23.275 CC test/bdev/bdevio/bdevio.o 00:03:23.534 CXX test/cpp_headers/fsdev_module.o 00:03:23.534 CXX test/cpp_headers/ftl.o 00:03:23.534 CC test/nvme/simple_copy/simple_copy.o 00:03:23.534 CXX test/cpp_headers/fuse_dispatcher.o 00:03:23.534 CXX test/cpp_headers/gpt_spec.o 00:03:23.534 LINK reserve 00:03:23.534 CC test/nvme/connect_stress/connect_stress.o 00:03:23.534 CXX test/cpp_headers/hexlify.o 00:03:23.793 CXX test/cpp_headers/histogram_data.o 
00:03:23.793 CXX test/cpp_headers/idxd.o 00:03:23.793 LINK connect_stress 00:03:23.793 CC test/nvme/boot_partition/boot_partition.o 00:03:23.793 CC test/nvme/compliance/nvme_compliance.o 00:03:23.793 LINK simple_copy 00:03:23.793 CXX test/cpp_headers/idxd_spec.o 00:03:23.793 LINK bdevio 00:03:23.793 CC test/nvme/fused_ordering/fused_ordering.o 00:03:24.051 LINK boot_partition 00:03:24.051 CC test/nvme/fdp/fdp.o 00:03:24.051 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:24.051 CXX test/cpp_headers/init.o 00:03:24.051 CXX test/cpp_headers/ioat.o 00:03:24.051 CC test/nvme/cuse/cuse.o 00:03:24.051 LINK fused_ordering 00:03:24.323 CXX test/cpp_headers/ioat_spec.o 00:03:24.323 CXX test/cpp_headers/iscsi_spec.o 00:03:24.323 LINK nvme_compliance 00:03:24.323 LINK bdevperf 00:03:24.323 CXX test/cpp_headers/json.o 00:03:24.323 LINK doorbell_aers 00:03:24.323 CXX test/cpp_headers/jsonrpc.o 00:03:24.323 CXX test/cpp_headers/keyring.o 00:03:24.323 CXX test/cpp_headers/keyring_module.o 00:03:24.323 CXX test/cpp_headers/likely.o 00:03:24.583 CXX test/cpp_headers/log.o 00:03:24.583 CXX test/cpp_headers/lvol.o 00:03:24.583 LINK fdp 00:03:24.583 CXX test/cpp_headers/md5.o 00:03:24.583 CXX test/cpp_headers/memory.o 00:03:24.583 CXX test/cpp_headers/mmio.o 00:03:24.583 CXX test/cpp_headers/nbd.o 00:03:24.583 CXX test/cpp_headers/net.o 00:03:24.583 CXX test/cpp_headers/notify.o 00:03:24.583 CXX test/cpp_headers/nvme.o 00:03:24.583 CXX test/cpp_headers/nvme_intel.o 00:03:24.583 CC examples/nvmf/nvmf/nvmf.o 00:03:24.841 CXX test/cpp_headers/nvme_ocssd.o 00:03:24.841 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:24.841 CXX test/cpp_headers/nvme_spec.o 00:03:24.841 CXX test/cpp_headers/nvme_zns.o 00:03:24.841 CXX test/cpp_headers/nvmf_cmd.o 00:03:24.841 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:24.841 CXX test/cpp_headers/nvmf.o 00:03:24.841 CXX test/cpp_headers/nvmf_spec.o 00:03:25.100 CXX test/cpp_headers/nvmf_transport.o 00:03:25.100 CXX test/cpp_headers/opal.o 00:03:25.100 LINK nvmf 00:03:25.100 CXX test/cpp_headers/opal_spec.o 00:03:25.100 CXX test/cpp_headers/pci_ids.o 00:03:25.100 CXX test/cpp_headers/pipe.o 00:03:25.100 CXX test/cpp_headers/queue.o 00:03:25.100 CXX test/cpp_headers/reduce.o 00:03:25.100 CXX test/cpp_headers/rpc.o 00:03:25.100 CXX test/cpp_headers/scheduler.o 00:03:25.100 CXX test/cpp_headers/scsi.o 00:03:25.358 CXX test/cpp_headers/scsi_spec.o 00:03:25.358 CXX test/cpp_headers/sock.o 00:03:25.358 CXX test/cpp_headers/stdinc.o 00:03:25.358 CXX test/cpp_headers/string.o 00:03:25.358 CXX test/cpp_headers/thread.o 00:03:25.358 CXX test/cpp_headers/trace.o 00:03:25.358 CXX test/cpp_headers/trace_parser.o 00:03:25.358 CXX test/cpp_headers/tree.o 00:03:25.358 CXX test/cpp_headers/ublk.o 00:03:25.358 CXX test/cpp_headers/util.o 00:03:25.358 CXX test/cpp_headers/uuid.o 00:03:25.358 CXX test/cpp_headers/version.o 00:03:25.358 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.358 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.358 CXX test/cpp_headers/vhost.o 00:03:25.617 CXX test/cpp_headers/vmd.o 00:03:25.617 CXX test/cpp_headers/xor.o 00:03:25.617 CXX test/cpp_headers/zipf.o 00:03:25.876 LINK cuse 00:03:29.165 LINK esnap 00:03:29.165 00:03:29.165 real 1m31.818s 00:03:29.165 user 8m51.516s 00:03:29.165 sys 1m34.255s 00:03:29.165 08:03:33 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:29.165 ************************************ 00:03:29.165 END TEST make 00:03:29.165 08:03:33 make -- common/autotest_common.sh@10 -- $ set +x 00:03:29.165 ************************************ 
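The long run of 'CXX test/cpp_headers/<name>.o' lines that just finished is a header self-containedness check: every public spdk/*.h is compiled on its own as C++, so a header missing one of its own includes (or an extern "C" guard) fails the build immediately. A minimal sketch of the idea; the generated .cpp names below are an illustration, not the actual test/cpp_headers build rules:

    for h in include/spdk/*.h; do
        name=$(basename "$h" .h)
        printf '#include <spdk/%s.h>\n' "$name" > "test/cpp_headers/$name.cpp"
        g++ -Iinclude -c "test/cpp_headers/$name.cpp" -o "test/cpp_headers/$name.o"
    done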
00:03:29.165 08:03:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:29.165 08:03:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:29.165 08:03:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:29.165 08:03:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.165 08:03:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:29.165 08:03:33 -- pm/common@44 -- $ pid=5341 00:03:29.165 08:03:33 -- pm/common@50 -- $ kill -TERM 5341 00:03:29.165 08:03:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.165 08:03:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:29.165 08:03:33 -- pm/common@44 -- $ pid=5342 00:03:29.165 08:03:33 -- pm/common@50 -- $ kill -TERM 5342 00:03:29.165 08:03:33 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:29.165 08:03:33 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:29.165 08:03:33 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:29.165 08:03:33 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:29.165 08:03:33 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:29.165 08:03:34 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:29.165 08:03:34 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:29.165 08:03:34 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:29.165 08:03:34 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:29.166 08:03:34 -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.166 08:03:34 -- scripts/common.sh@336 -- # read -ra ver1 00:03:29.166 08:03:34 -- scripts/common.sh@337 -- # IFS=.-: 00:03:29.166 08:03:34 -- scripts/common.sh@337 -- # read -ra ver2 00:03:29.166 08:03:34 -- scripts/common.sh@338 -- # local 'op=<' 00:03:29.166 08:03:34 -- scripts/common.sh@340 -- # ver1_l=2 00:03:29.166 08:03:34 -- scripts/common.sh@341 -- # ver2_l=1 00:03:29.166 08:03:34 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:29.166 08:03:34 -- scripts/common.sh@344 -- # case "$op" in 00:03:29.166 08:03:34 -- scripts/common.sh@345 -- # : 1 00:03:29.166 08:03:34 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:29.166 08:03:34 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:29.166 08:03:34 -- scripts/common.sh@365 -- # decimal 1 00:03:29.166 08:03:34 -- scripts/common.sh@353 -- # local d=1 00:03:29.166 08:03:34 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.166 08:03:34 -- scripts/common.sh@355 -- # echo 1 00:03:29.166 08:03:34 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:29.166 08:03:34 -- scripts/common.sh@366 -- # decimal 2 00:03:29.166 08:03:34 -- scripts/common.sh@353 -- # local d=2 00:03:29.166 08:03:34 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.166 08:03:34 -- scripts/common.sh@355 -- # echo 2 00:03:29.166 08:03:34 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:29.166 08:03:34 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:29.166 08:03:34 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:29.166 08:03:34 -- scripts/common.sh@368 -- # return 0 00:03:29.166 08:03:34 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.166 08:03:34 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:29.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.166 --rc genhtml_branch_coverage=1 00:03:29.166 --rc genhtml_function_coverage=1 00:03:29.166 --rc genhtml_legend=1 00:03:29.166 --rc geninfo_all_blocks=1 00:03:29.166 --rc geninfo_unexecuted_blocks=1 00:03:29.166 00:03:29.166 ' 00:03:29.166 08:03:34 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:29.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.166 --rc genhtml_branch_coverage=1 00:03:29.166 --rc genhtml_function_coverage=1 00:03:29.166 --rc genhtml_legend=1 00:03:29.166 --rc geninfo_all_blocks=1 00:03:29.166 --rc geninfo_unexecuted_blocks=1 00:03:29.166 00:03:29.166 ' 00:03:29.166 08:03:34 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:29.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.166 --rc genhtml_branch_coverage=1 00:03:29.166 --rc genhtml_function_coverage=1 00:03:29.166 --rc genhtml_legend=1 00:03:29.166 --rc geninfo_all_blocks=1 00:03:29.166 --rc geninfo_unexecuted_blocks=1 00:03:29.166 00:03:29.166 ' 00:03:29.166 08:03:34 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:29.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.166 --rc genhtml_branch_coverage=1 00:03:29.166 --rc genhtml_function_coverage=1 00:03:29.166 --rc genhtml_legend=1 00:03:29.166 --rc geninfo_all_blocks=1 00:03:29.166 --rc geninfo_unexecuted_blocks=1 00:03:29.166 00:03:29.166 ' 00:03:29.166 08:03:34 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:29.166 08:03:34 -- nvmf/common.sh@7 -- # uname -s 00:03:29.166 08:03:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:29.166 08:03:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:29.166 08:03:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:29.166 08:03:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:29.166 08:03:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:29.166 08:03:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:29.166 08:03:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:29.166 08:03:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:29.166 08:03:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:29.166 08:03:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:29.166 08:03:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f728c72-5b6a-4803-83fc-68787cb19dfd 00:03:29.166 
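The cmp_versions trace above (the 'lt 1.15 2' check on the lcov version) reduces to a field-by-field numeric compare, splitting on '.', '-' and ':' and treating missing fields as zero; a return of 0 means the left version is lower, which is why the lcov 1.x --rc branch/function options get exported. A condensed sketch of that logic, not the verbatim scripts/common.sh source:

    lt() { # lt A B -> returns 0 when version A sorts below version B
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1 # equal is not 'less than'
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov 1.x options selected'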
08:03:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=8f728c72-5b6a-4803-83fc-68787cb19dfd 00:03:29.166 08:03:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:29.166 08:03:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:29.166 08:03:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:29.166 08:03:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:29.166 08:03:34 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:29.166 08:03:34 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:29.166 08:03:34 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:29.166 08:03:34 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:29.166 08:03:34 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:29.166 08:03:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.166 08:03:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.166 08:03:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.166 08:03:34 -- paths/export.sh@5 -- # export PATH 00:03:29.166 08:03:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.166 08:03:34 -- nvmf/common.sh@51 -- # : 0 00:03:29.166 08:03:34 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:29.166 08:03:34 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:29.166 08:03:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:29.166 08:03:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:29.166 08:03:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:29.166 08:03:34 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:29.166 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:29.166 08:03:34 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:29.166 08:03:34 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:29.166 08:03:34 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:29.166 08:03:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:29.166 08:03:34 -- spdk/autotest.sh@32 -- # uname -s 00:03:29.166 08:03:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:29.166 08:03:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:29.166 08:03:34 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:29.166 08:03:34 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:29.166 08:03:34 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:29.166 08:03:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:29.166 08:03:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:29.166 08:03:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:29.166 08:03:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:29.166 08:03:34 -- spdk/autotest.sh@48 -- # udevadm_pid=54859 00:03:29.166 08:03:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:29.166 08:03:34 -- pm/common@17 -- # local monitor 00:03:29.166 08:03:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.166 08:03:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.166 08:03:34 -- pm/common@25 -- # sleep 1 00:03:29.166 08:03:34 -- pm/common@21 -- # date +%s 00:03:29.166 08:03:34 -- pm/common@21 -- # date +%s 00:03:29.166 08:03:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731830614 00:03:29.166 08:03:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731830614 00:03:29.166 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731830614_collect-cpu-load.pm.log 00:03:29.166 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731830614_collect-vmstat.pm.log 00:03:30.544 08:03:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:30.544 08:03:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:30.544 08:03:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:30.544 08:03:35 -- common/autotest_common.sh@10 -- # set +x 00:03:30.544 08:03:35 -- spdk/autotest.sh@59 -- # create_test_list 00:03:30.544 08:03:35 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:30.544 08:03:35 -- common/autotest_common.sh@10 -- # set +x 00:03:30.544 08:03:35 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:30.544 08:03:35 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:30.544 08:03:35 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:30.544 08:03:35 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:30.544 08:03:35 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:30.544 08:03:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:30.544 08:03:35 -- common/autotest_common.sh@1457 -- # uname 00:03:30.544 08:03:35 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:30.544 08:03:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:30.544 08:03:35 -- common/autotest_common.sh@1477 -- # uname 00:03:30.545 08:03:35 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:30.545 08:03:35 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:30.545 08:03:35 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:30.545 lcov: LCOV version 1.15 00:03:30.545 08:03:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:45.427 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:45.427 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:57.635 08:04:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:57.635 08:04:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.635 08:04:01 -- common/autotest_common.sh@10 -- # set +x 00:03:57.635 08:04:01 -- spdk/autotest.sh@78 -- # rm -f 00:03:57.635 08:04:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.894 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:57.894 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:57.894 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:57.894 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:58.154 08:04:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:58.154 08:04:02 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:58.154 08:04:02 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:58.154 08:04:02 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:58.154 08:04:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:58.154 08:04:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:58.154 08:04:02 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:58.154 08:04:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.154 08:04:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.154 08:04:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:58.154 08:04:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:58.154 08:04:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:58.154 08:04:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:58.154 08:04:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.154 08:04:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:58.154 08:04:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:58.154 08:04:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:58.154 08:04:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:58.154 08:04:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.154 08:04:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:58.154 08:04:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:58.154 08:04:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:58.154 08:04:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:58.154 08:04:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.154 08:04:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:58.154 08:04:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:03:58.154 08:04:02 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:58.154 08:04:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:58.154 08:04:02 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.154 08:04:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:58.154 08:04:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:03:58.154 08:04:02 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:58.154 08:04:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:58.154 08:04:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.154 08:04:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:58.154 08:04:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:03:58.154 08:04:02 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:03:58.154 08:04:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:58.154 08:04:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:58.154 08:04:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:58.154 08:04:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.154 08:04:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.154 08:04:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:58.154 08:04:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:58.154 08:04:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:58.154 No valid GPT data, bailing 00:03:58.154 08:04:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:58.154 08:04:02 -- scripts/common.sh@394 -- # pt= 00:03:58.154 08:04:02 -- scripts/common.sh@395 -- # return 1 00:03:58.154 08:04:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:58.154 1+0 records in 00:03:58.154 1+0 records out 00:03:58.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403988 s, 260 MB/s 00:03:58.154 08:04:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.154 08:04:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.154 08:04:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:58.154 08:04:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:58.154 08:04:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:58.154 No valid GPT data, bailing 00:03:58.154 08:04:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:58.154 08:04:03 -- scripts/common.sh@394 -- # pt= 00:03:58.154 08:04:03 -- scripts/common.sh@395 -- # return 1 00:03:58.154 08:04:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:58.154 1+0 records in 00:03:58.154 1+0 records out 00:03:58.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461763 s, 227 MB/s 00:03:58.154 08:04:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.154 08:04:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.154 08:04:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:58.154 08:04:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:58.154 08:04:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:58.154 No valid GPT data, bailing 00:03:58.154 08:04:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:58.154 08:04:03 -- scripts/common.sh@394 -- # pt= 00:03:58.154 08:04:03 -- scripts/common.sh@395 -- # return 1 00:03:58.154 08:04:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:58.154 1+0 
records in 00:03:58.154 1+0 records out 00:03:58.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00437197 s, 240 MB/s 00:03:58.154 08:04:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.154 08:04:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.154 08:04:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:58.154 08:04:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:58.154 08:04:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:58.414 No valid GPT data, bailing 00:03:58.414 08:04:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:58.414 08:04:03 -- scripts/common.sh@394 -- # pt= 00:03:58.414 08:04:03 -- scripts/common.sh@395 -- # return 1 00:03:58.414 08:04:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:58.414 1+0 records in 00:03:58.414 1+0 records out 00:03:58.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434199 s, 241 MB/s 00:03:58.414 08:04:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.414 08:04:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.414 08:04:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:58.414 08:04:03 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:58.414 08:04:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:58.414 No valid GPT data, bailing 00:03:58.414 08:04:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:58.414 08:04:03 -- scripts/common.sh@394 -- # pt= 00:03:58.414 08:04:03 -- scripts/common.sh@395 -- # return 1 00:03:58.414 08:04:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:58.414 1+0 records in 00:03:58.414 1+0 records out 00:03:58.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156153 s, 67.2 MB/s 00:03:58.414 08:04:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:58.414 08:04:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:58.414 08:04:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:58.414 08:04:03 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:58.414 08:04:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:58.414 No valid GPT data, bailing 00:03:58.414 08:04:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:58.414 08:04:03 -- scripts/common.sh@394 -- # pt= 00:03:58.414 08:04:03 -- scripts/common.sh@395 -- # return 1 00:03:58.414 08:04:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:58.414 1+0 records in 00:03:58.414 1+0 records out 00:03:58.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446712 s, 235 MB/s 00:03:58.414 08:04:03 -- spdk/autotest.sh@105 -- # sync 00:03:58.676 08:04:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:58.676 08:04:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:58.676 08:04:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:00.584 08:04:05 -- spdk/autotest.sh@111 -- # uname -s 00:04:00.584 08:04:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:00.584 08:04:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:00.584 08:04:05 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:01.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.721 
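Every 'No valid GPT data, bailing' / dd pair above follows one pattern: if nothing recognizable claims an NVMe namespace, its first MiB is zeroed so later tests start from a blank device. A condensed sketch; the real flow also consults scripts/spdk-gpt.py and skips devices that are in use:

    shopt -s extglob   # for the nvme*n!(*p*) glob seen in the trace
    for dev in /dev/nvme*n!(*p*); do
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # wipe the first MiB
        fi
    done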
Hugepages 00:04:01.721 node hugesize free / total 00:04:01.721 node0 1048576kB 0 / 0 00:04:01.721 node0 2048kB 0 / 0 00:04:01.721 00:04:01.721 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.721 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:01.980 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:01.980 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:01.980 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:01.980 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:02.239 08:04:06 -- spdk/autotest.sh@117 -- # uname -s 00:04:02.239 08:04:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:02.239 08:04:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:02.239 08:04:06 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.808 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.375 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.375 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.375 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.375 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.375 08:04:08 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:04.312 08:04:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:04.312 08:04:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:04.312 08:04:09 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:04.312 08:04:09 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:04.312 08:04:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:04.312 08:04:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:04.312 08:04:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.312 08:04:09 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.312 08:04:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:04.571 08:04:09 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:04.571 08:04:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:04.571 08:04:09 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.088 Waiting for block devices as requested 00:04:05.088 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.088 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.349 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.349 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:10.689 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:10.689 08:04:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:10.689 08:04:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:10.689 08:04:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:10.689 08:04:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:10.689 08:04:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:10.689 08:04:15 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:10.689 08:04:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:10.689 08:04:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:10.689 08:04:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:10.689 08:04:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:10.689 08:04:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:10.689 08:04:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1543 -- # continue 00:04:10.689 08:04:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:10.689 08:04:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:10.689 08:04:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:10.689 08:04:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:10.689 08:04:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:10.689 08:04:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:10.689 08:04:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:10.689 08:04:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:10.689 08:04:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:10.689 08:04:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:10.689 08:04:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:10.689 08:04:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1543 -- # continue 00:04:10.689 08:04:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:10.689 08:04:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:10.689 08:04:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:10.689 08:04:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:10.689 08:04:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:10.689 08:04:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:10.689 08:04:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:10.689 08:04:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1543 -- # continue 00:04:10.689 08:04:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:10.689 08:04:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:10.689 08:04:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:10.689 08:04:15 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:10.689 08:04:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:10.689 08:04:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:10.689 08:04:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:10.689 08:04:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:10.689 08:04:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:10.689 08:04:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:10.689 08:04:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:10.689 08:04:15 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:10.689 08:04:15 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:10.689 08:04:15 -- common/autotest_common.sh@1543 -- # continue 00:04:10.689 08:04:15 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:10.689 08:04:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.690 08:04:15 -- common/autotest_common.sh@10 -- # set +x 00:04:10.690 08:04:15 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:10.690 08:04:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.690 08:04:15 -- common/autotest_common.sh@10 -- # set +x 00:04:10.690 08:04:15 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.824 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.824 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.824 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.824 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.083 08:04:16 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:12.083 08:04:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.083 08:04:16 -- common/autotest_common.sh@10 -- # set +x 00:04:12.083 08:04:16 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:12.083 08:04:16 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:12.083 08:04:16 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:12.083 08:04:16 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:12.083 08:04:16 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:12.083 08:04:16 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:12.083 08:04:16 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:12.083 08:04:16 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:12.083 08:04:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:12.083 08:04:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:12.083 08:04:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.083 08:04:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:12.083 08:04:16 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:12.083 08:04:16 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:12.084 08:04:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:12.084 08:04:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:12.084 08:04:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:12.084 08:04:16 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:12.084 08:04:16 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:12.084 08:04:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:12.084 08:04:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:12.084 08:04:16 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:12.084 08:04:16 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:12.084 08:04:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:12.084 08:04:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:12.084 08:04:16 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:12.084 08:04:16 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
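Each of the four controller probes above ends the same way: read OACS from nvme id-ctrl, test the Namespace Management bit (0x8), then confirm no unallocated NVM capacity remains before continuing. One probe, condensed (controller node and field parsing as logged):

    ctrlr=/dev/nvme1   # resolved from PCI address 0000:00:10.0 above
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)       # ' 0x12a' in the trace
    if (( (oacs & 0x8) != 0 )); then                              # namespace management supported
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "$ctrlr: no unallocated capacity, namespaces intact"
    fi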
00:04:12.084 08:04:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:12.084 08:04:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:12.084 08:04:16 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:12.084 08:04:16 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:12.084 08:04:16 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:12.084 08:04:16 -- common/autotest_common.sh@1572 -- # return 0 00:04:12.084 08:04:16 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:12.084 08:04:16 -- common/autotest_common.sh@1580 -- # return 0 00:04:12.084 08:04:16 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:12.084 08:04:16 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:12.084 08:04:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:12.084 08:04:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:12.084 08:04:16 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:12.084 08:04:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.084 08:04:16 -- common/autotest_common.sh@10 -- # set +x 00:04:12.084 08:04:16 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:12.084 08:04:16 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:12.084 08:04:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.084 08:04:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.084 08:04:16 -- common/autotest_common.sh@10 -- # set +x 00:04:12.084 ************************************ 00:04:12.084 START TEST env 00:04:12.084 ************************************ 00:04:12.084 08:04:17 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:12.342 * Looking for test storage... 00:04:12.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:12.342 08:04:17 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:12.342 08:04:17 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:12.342 08:04:17 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:12.342 08:04:17 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:12.342 08:04:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.343 08:04:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.343 08:04:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.343 08:04:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.343 08:04:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.343 08:04:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.343 08:04:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.343 08:04:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.343 08:04:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.343 08:04:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.343 08:04:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.343 08:04:17 env -- scripts/common.sh@344 -- # case "$op" in 00:04:12.343 08:04:17 env -- scripts/common.sh@345 -- # : 1 00:04:12.343 08:04:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.343 08:04:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.343 08:04:17 env -- scripts/common.sh@365 -- # decimal 1 00:04:12.343 08:04:17 env -- scripts/common.sh@353 -- # local d=1 00:04:12.343 08:04:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.343 08:04:17 env -- scripts/common.sh@355 -- # echo 1 00:04:12.343 08:04:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.343 08:04:17 env -- scripts/common.sh@366 -- # decimal 2 00:04:12.343 08:04:17 env -- scripts/common.sh@353 -- # local d=2 00:04:12.343 08:04:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.343 08:04:17 env -- scripts/common.sh@355 -- # echo 2 00:04:12.343 08:04:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.343 08:04:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.343 08:04:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.343 08:04:17 env -- scripts/common.sh@368 -- # return 0 00:04:12.343 08:04:17 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.343 08:04:17 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:12.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.343 --rc genhtml_branch_coverage=1 00:04:12.343 --rc genhtml_function_coverage=1 00:04:12.343 --rc genhtml_legend=1 00:04:12.343 --rc geninfo_all_blocks=1 00:04:12.343 --rc geninfo_unexecuted_blocks=1 00:04:12.343 00:04:12.343 ' 00:04:12.343 08:04:17 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:12.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.343 --rc genhtml_branch_coverage=1 00:04:12.343 --rc genhtml_function_coverage=1 00:04:12.343 --rc genhtml_legend=1 00:04:12.343 --rc geninfo_all_blocks=1 00:04:12.343 --rc geninfo_unexecuted_blocks=1 00:04:12.343 00:04:12.343 ' 00:04:12.343 08:04:17 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:12.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.343 --rc genhtml_branch_coverage=1 00:04:12.343 --rc genhtml_function_coverage=1 00:04:12.343 --rc genhtml_legend=1 00:04:12.343 --rc geninfo_all_blocks=1 00:04:12.343 --rc geninfo_unexecuted_blocks=1 00:04:12.343 00:04:12.343 ' 00:04:12.343 08:04:17 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:12.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.343 --rc genhtml_branch_coverage=1 00:04:12.343 --rc genhtml_function_coverage=1 00:04:12.343 --rc genhtml_legend=1 00:04:12.343 --rc geninfo_all_blocks=1 00:04:12.343 --rc geninfo_unexecuted_blocks=1 00:04:12.343 00:04:12.343 ' 00:04:12.343 08:04:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:12.343 08:04:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.343 08:04:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.343 08:04:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.343 ************************************ 00:04:12.343 START TEST env_memory 00:04:12.343 ************************************ 00:04:12.343 08:04:17 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:12.343 00:04:12.343 00:04:12.343 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.343 http://cunit.sourceforge.net/ 00:04:12.343 00:04:12.343 00:04:12.343 Suite: memory 00:04:12.343 Test: alloc and free memory map ...[2024-11-17 08:04:17.299931] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:12.343 passed 00:04:12.602 Test: mem map translation ...[2024-11-17 08:04:17.360574] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:12.602 [2024-11-17 08:04:17.360792] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:12.602 [2024-11-17 08:04:17.360896] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:12.602 [2024-11-17 08:04:17.360929] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:12.602 passed 00:04:12.602 Test: mem map registration ...[2024-11-17 08:04:17.460423] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:12.602 [2024-11-17 08:04:17.460494] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:12.602 passed 00:04:12.602 Test: mem map adjacent registrations ...passed 00:04:12.602 00:04:12.602 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.602 suites 1 1 n/a 0 0 00:04:12.602 tests 4 4 4 0 0 00:04:12.602 asserts 152 152 152 0 n/a 00:04:12.602 00:04:12.602 Elapsed time = 0.345 seconds 00:04:12.602 ************************************ 00:04:12.602 END TEST env_memory 00:04:12.602 ************************************ 00:04:12.602 00:04:12.602 real 0m0.388s 00:04:12.602 user 0m0.354s 00:04:12.602 sys 0m0.024s 00:04:12.602 08:04:17 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.602 08:04:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:12.861 08:04:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:12.861 08:04:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.861 08:04:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.861 08:04:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.861 ************************************ 00:04:12.861 START TEST env_vtophys 00:04:12.861 ************************************ 00:04:12.861 08:04:17 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:12.861 EAL: lib.eal log level changed from notice to debug 00:04:12.861 EAL: Detected lcore 0 as core 0 on socket 0 00:04:12.861 EAL: Detected lcore 1 as core 0 on socket 0 00:04:12.861 EAL: Detected lcore 2 as core 0 on socket 0 00:04:12.861 EAL: Detected lcore 3 as core 0 on socket 0 00:04:12.861 EAL: Detected lcore 4 as core 0 on socket 0 00:04:12.861 EAL: Detected lcore 5 as core 0 on socket 0 00:04:12.861 EAL: Detected lcore 6 as core 0 on socket 0 00:04:12.861 EAL: Detected lcore 7 as core 0 on socket 0 00:04:12.861 EAL: Detected lcore 8 as core 0 on socket 0 00:04:12.861 EAL: Detected lcore 9 as core 0 on socket 0 00:04:12.861 EAL: Maximum logical cores by configuration: 128 00:04:12.861 EAL: Detected CPU lcores: 10 00:04:12.861 EAL: Detected NUMA nodes: 1 00:04:12.861 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:12.861 EAL: Detected shared linkage of DPDK 00:04:12.861 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:12.861 EAL: Selected IOVA mode 'PA' 00:04:12.861 EAL: Probing VFIO support... 00:04:12.861 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:12.861 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:12.861 EAL: Ask a virtual area of 0x2e000 bytes 00:04:12.861 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:12.861 EAL: Setting up physically contiguous memory... 00:04:12.861 EAL: Setting maximum number of open files to 524288 00:04:12.861 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:12.861 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:12.861 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.861 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:12.861 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.861 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.861 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:12.861 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:12.861 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.861 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:12.861 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.861 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.861 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:12.861 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:12.861 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.861 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:12.861 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.861 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.861 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:12.861 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:12.861 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.861 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:12.861 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.861 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.861 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:12.861 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:12.861 EAL: Hugepages will be freed exactly as allocated. 00:04:12.861 EAL: No shared files mode enabled, IPC is disabled 00:04:12.861 EAL: No shared files mode enabled, IPC is disabled 00:04:12.861 EAL: TSC frequency is ~2200000 KHz 00:04:12.861 EAL: Main lcore 0 is ready (tid=7f6e7bd0da40;cpuset=[0]) 00:04:12.861 EAL: Trying to obtain current memory policy. 00:04:12.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.861 EAL: Restoring previous memory policy: 0 00:04:12.861 EAL: request: mp_malloc_sync 00:04:12.861 EAL: No shared files mode enabled, IPC is disabled 00:04:12.861 EAL: Heap on socket 0 was expanded by 2MB 00:04:12.861 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:13.120 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:13.120 EAL: Mem event callback 'spdk:(nil)' registered 00:04:13.120 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:13.120 00:04:13.120 00:04:13.120 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.120 http://cunit.sourceforge.net/ 00:04:13.120 00:04:13.120 00:04:13.120 Suite: components_suite 00:04:13.378 Test: vtophys_malloc_test ...passed 00:04:13.378 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:13.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.378 EAL: Restoring previous memory policy: 4 00:04:13.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.378 EAL: request: mp_malloc_sync 00:04:13.378 EAL: No shared files mode enabled, IPC is disabled 00:04:13.378 EAL: Heap on socket 0 was expanded by 4MB 00:04:13.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.378 EAL: request: mp_malloc_sync 00:04:13.378 EAL: No shared files mode enabled, IPC is disabled 00:04:13.378 EAL: Heap on socket 0 was shrunk by 4MB 00:04:13.378 EAL: Trying to obtain current memory policy. 00:04:13.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.378 EAL: Restoring previous memory policy: 4 00:04:13.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.378 EAL: request: mp_malloc_sync 00:04:13.378 EAL: No shared files mode enabled, IPC is disabled 00:04:13.378 EAL: Heap on socket 0 was expanded by 6MB 00:04:13.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.378 EAL: request: mp_malloc_sync 00:04:13.378 EAL: No shared files mode enabled, IPC is disabled 00:04:13.378 EAL: Heap on socket 0 was shrunk by 6MB 00:04:13.378 EAL: Trying to obtain current memory policy. 00:04:13.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.378 EAL: Restoring previous memory policy: 4 00:04:13.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.378 EAL: request: mp_malloc_sync 00:04:13.378 EAL: No shared files mode enabled, IPC is disabled 00:04:13.378 EAL: Heap on socket 0 was expanded by 10MB 00:04:13.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.378 EAL: request: mp_malloc_sync 00:04:13.378 EAL: No shared files mode enabled, IPC is disabled 00:04:13.378 EAL: Heap on socket 0 was shrunk by 10MB 00:04:13.378 EAL: Trying to obtain current memory policy. 00:04:13.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.378 EAL: Restoring previous memory policy: 4 00:04:13.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.378 EAL: request: mp_malloc_sync 00:04:13.378 EAL: No shared files mode enabled, IPC is disabled 00:04:13.378 EAL: Heap on socket 0 was expanded by 18MB 00:04:13.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.378 EAL: request: mp_malloc_sync 00:04:13.378 EAL: No shared files mode enabled, IPC is disabled 00:04:13.378 EAL: Heap on socket 0 was shrunk by 18MB 00:04:13.378 EAL: Trying to obtain current memory policy. 00:04:13.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.378 EAL: Restoring previous memory policy: 4 00:04:13.378 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.378 EAL: request: mp_malloc_sync 00:04:13.378 EAL: No shared files mode enabled, IPC is disabled 00:04:13.378 EAL: Heap on socket 0 was expanded by 34MB 00:04:13.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.637 EAL: request: mp_malloc_sync 00:04:13.637 EAL: No shared files mode enabled, IPC is disabled 00:04:13.637 EAL: Heap on socket 0 was shrunk by 34MB 00:04:13.637 EAL: Trying to obtain current memory policy. 
00:04:13.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.637 EAL: Restoring previous memory policy: 4 00:04:13.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.637 EAL: request: mp_malloc_sync 00:04:13.637 EAL: No shared files mode enabled, IPC is disabled 00:04:13.637 EAL: Heap on socket 0 was expanded by 66MB 00:04:13.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.637 EAL: request: mp_malloc_sync 00:04:13.637 EAL: No shared files mode enabled, IPC is disabled 00:04:13.637 EAL: Heap on socket 0 was shrunk by 66MB 00:04:13.637 EAL: Trying to obtain current memory policy. 00:04:13.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.637 EAL: Restoring previous memory policy: 4 00:04:13.637 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.637 EAL: request: mp_malloc_sync 00:04:13.637 EAL: No shared files mode enabled, IPC is disabled 00:04:13.637 EAL: Heap on socket 0 was expanded by 130MB 00:04:13.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.897 EAL: request: mp_malloc_sync 00:04:13.897 EAL: No shared files mode enabled, IPC is disabled 00:04:13.897 EAL: Heap on socket 0 was shrunk by 130MB 00:04:13.897 EAL: Trying to obtain current memory policy. 00:04:13.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.156 EAL: Restoring previous memory policy: 4 00:04:14.156 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.156 EAL: request: mp_malloc_sync 00:04:14.156 EAL: No shared files mode enabled, IPC is disabled 00:04:14.156 EAL: Heap on socket 0 was expanded by 258MB 00:04:14.414 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.414 EAL: request: mp_malloc_sync 00:04:14.414 EAL: No shared files mode enabled, IPC is disabled 00:04:14.414 EAL: Heap on socket 0 was shrunk by 258MB 00:04:14.672 EAL: Trying to obtain current memory policy. 00:04:14.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.672 EAL: Restoring previous memory policy: 4 00:04:14.672 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.672 EAL: request: mp_malloc_sync 00:04:14.672 EAL: No shared files mode enabled, IPC is disabled 00:04:14.672 EAL: Heap on socket 0 was expanded by 514MB 00:04:15.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.499 EAL: request: mp_malloc_sync 00:04:15.499 EAL: No shared files mode enabled, IPC is disabled 00:04:15.499 EAL: Heap on socket 0 was shrunk by 514MB 00:04:16.066 EAL: Trying to obtain current memory policy. 
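A note on the pattern in this suite: each expand/shrink pair is one allocation round of vtophys_spdk_malloc_test. The heap grows by 2^n + 2 MB per round (4, 6, 10, 18, 34, 66, 130, 258, 514, then 1026 MB below) as the test allocates progressively larger buffers from the SPDK env heap, and the 'spdk:(nil)' mem event callback registered at startup logs every grow and shrink. To rerun just this binary outside the autotest harness, a sketch assuming hugepages are provisioned with SPDK's stock setup.sh helper:

  # reserve hugepage memory first (HUGEMEM is in MB; helper ships in scripts/)
  sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # then run the unit test binary directly, as run_test does in this log
  sudo /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys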
00:04:16.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.066 EAL: Restoring previous memory policy: 4 00:04:16.066 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.066 EAL: request: mp_malloc_sync 00:04:16.066 EAL: No shared files mode enabled, IPC is disabled 00:04:16.066 EAL: Heap on socket 0 was expanded by 1026MB 00:04:17.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.444 EAL: request: mp_malloc_sync 00:04:17.444 EAL: No shared files mode enabled, IPC is disabled 00:04:17.444 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:18.379 passed 00:04:18.379 00:04:18.380 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.380 suites 1 1 n/a 0 0 00:04:18.380 tests 2 2 2 0 0 00:04:18.380 asserts 5705 5705 5705 0 n/a 00:04:18.380 00:04:18.380 Elapsed time = 5.385 seconds 00:04:18.380 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.380 EAL: request: mp_malloc_sync 00:04:18.380 EAL: No shared files mode enabled, IPC is disabled 00:04:18.380 EAL: Heap on socket 0 was shrunk by 2MB 00:04:18.380 EAL: No shared files mode enabled, IPC is disabled 00:04:18.380 EAL: No shared files mode enabled, IPC is disabled 00:04:18.380 EAL: No shared files mode enabled, IPC is disabled 00:04:18.380 ************************************ 00:04:18.380 END TEST env_vtophys 00:04:18.380 ************************************ 00:04:18.380 00:04:18.380 real 0m5.704s 00:04:18.380 user 0m4.945s 00:04:18.380 sys 0m0.604s 00:04:18.380 08:04:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.380 08:04:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:18.639 08:04:23 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:18.639 08:04:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.639 08:04:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.639 08:04:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.639 ************************************ 00:04:18.639 START TEST env_pci 00:04:18.639 ************************************ 00:04:18.639 08:04:23 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:18.639 00:04:18.639 00:04:18.639 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.639 http://cunit.sourceforge.net/ 00:04:18.639 00:04:18.639 00:04:18.639 Suite: pci 00:04:18.639 Test: pci_hook ...[2024-11-17 08:04:23.464433] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57618 has claimed it 00:04:18.639 EAL: Cannot find device (10000:00:01.0) 00:04:18.639 passed 00:04:18.639 00:04:18.639 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.639 suites 1 1 n/a 0 0 00:04:18.639 tests 1 1 1 0 0 00:04:18.639 asserts 25 25 25 0 n/a 00:04:18.639 00:04:18.639 Elapsed time = 0.008 seconds 00:04:18.639 EAL: Failed to attach device on primary process 00:04:18.639 00:04:18.639 real 0m0.079s 00:04:18.639 user 0m0.039s 00:04:18.639 sys 0m0.039s 00:04:18.639 08:04:23 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.639 08:04:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:18.639 ************************************ 00:04:18.639 END TEST env_pci 00:04:18.639 ************************************ 00:04:18.639 08:04:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:18.639 08:04:23 env -- env/env.sh@15 -- # uname 00:04:18.639 08:04:23 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:18.639 08:04:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:18.639 08:04:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:18.639 08:04:23 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:18.639 08:04:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.639 08:04:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.639 ************************************ 00:04:18.639 START TEST env_dpdk_post_init 00:04:18.639 ************************************ 00:04:18.639 08:04:23 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:18.639 EAL: Detected CPU lcores: 10 00:04:18.639 EAL: Detected NUMA nodes: 1 00:04:18.639 EAL: Detected shared linkage of DPDK 00:04:18.898 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:18.898 EAL: Selected IOVA mode 'PA' 00:04:18.898 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:18.898 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:18.898 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:18.898 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:18.898 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:18.898 Starting DPDK initialization... 00:04:18.898 Starting SPDK post initialization... 00:04:18.898 SPDK NVMe probe 00:04:18.898 Attaching to 0000:00:10.0 00:04:18.898 Attaching to 0000:00:11.0 00:04:18.898 Attaching to 0000:00:12.0 00:04:18.898 Attaching to 0000:00:13.0 00:04:18.898 Attached to 0000:00:10.0 00:04:18.898 Attached to 0000:00:11.0 00:04:18.898 Attached to 0000:00:13.0 00:04:18.898 Attached to 0000:00:12.0 00:04:18.898 Cleaning up... 
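The four controllers probed here are QEMU's emulated NVMe devices (PCI ID 1b36:0010) at 0000:00:10.0 through 0000:00:13.0. The attach order (10, 11, 13, 12) differing from probe order is normal: attach callbacks fire as each controller finishes initializing, not in enumeration order. To inspect how those devices are bound before or after a run, a sketch using SPDK's standard helper:

  # show each SPDK-managed PCI device and the driver it is currently bound to
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status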
00:04:18.898 ************************************ 00:04:18.898 END TEST env_dpdk_post_init 00:04:18.898 ************************************ 00:04:18.898 00:04:18.898 real 0m0.296s 00:04:18.898 user 0m0.097s 00:04:18.898 sys 0m0.100s 00:04:18.898 08:04:23 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.898 08:04:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:18.898 08:04:23 env -- env/env.sh@26 -- # uname 00:04:18.898 08:04:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:18.898 08:04:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:18.898 08:04:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.898 08:04:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.898 08:04:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.158 ************************************ 00:04:19.158 START TEST env_mem_callbacks 00:04:19.158 ************************************ 00:04:19.158 08:04:23 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:19.158 EAL: Detected CPU lcores: 10 00:04:19.158 EAL: Detected NUMA nodes: 1 00:04:19.158 EAL: Detected shared linkage of DPDK 00:04:19.158 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.158 EAL: Selected IOVA mode 'PA' 00:04:19.158 00:04:19.158 00:04:19.158 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.158 http://cunit.sourceforge.net/ 00:04:19.158 00:04:19.158 00:04:19.158 Suite: memory 00:04:19.158 Test: test ... 00:04:19.158 register 0x200000200000 2097152 00:04:19.158 malloc 3145728 00:04:19.158 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.158 register 0x200000400000 4194304 00:04:19.158 buf 0x2000004fffc0 len 3145728 PASSED 00:04:19.158 malloc 64 00:04:19.158 buf 0x2000004ffec0 len 64 PASSED 00:04:19.158 malloc 4194304 00:04:19.158 register 0x200000800000 6291456 00:04:19.158 buf 0x2000009fffc0 len 4194304 PASSED 00:04:19.158 free 0x2000004fffc0 3145728 00:04:19.158 free 0x2000004ffec0 64 00:04:19.158 unregister 0x200000400000 4194304 PASSED 00:04:19.158 free 0x2000009fffc0 4194304 00:04:19.158 unregister 0x200000800000 6291456 PASSED 00:04:19.158 malloc 8388608 00:04:19.158 register 0x200000400000 10485760 00:04:19.158 buf 0x2000005fffc0 len 8388608 PASSED 00:04:19.158 free 0x2000005fffc0 8388608 00:04:19.158 unregister 0x200000400000 10485760 PASSED 00:04:19.158 passed 00:04:19.158 00:04:19.158 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.158 suites 1 1 n/a 0 0 00:04:19.158 tests 1 1 1 0 0 00:04:19.158 asserts 15 15 15 0 n/a 00:04:19.158 00:04:19.158 Elapsed time = 0.050 seconds 00:04:19.417 00:04:19.417 real 0m0.255s 00:04:19.417 user 0m0.084s 00:04:19.417 sys 0m0.068s 00:04:19.417 08:04:24 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.417 08:04:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:19.417 ************************************ 00:04:19.417 END TEST env_mem_callbacks 00:04:19.417 ************************************ 00:04:19.417 ************************************ 00:04:19.417 END TEST env 00:04:19.417 ************************************ 00:04:19.417 00:04:19.417 real 0m7.204s 00:04:19.417 user 0m5.714s 00:04:19.417 sys 0m1.096s 00:04:19.417 08:04:24 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.417 08:04:24 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:19.417 08:04:24 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:19.417 08:04:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.417 08:04:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.417 08:04:24 -- common/autotest_common.sh@10 -- # set +x 00:04:19.417 ************************************ 00:04:19.417 START TEST rpc 00:04:19.417 ************************************ 00:04:19.417 08:04:24 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:19.417 * Looking for test storage... 00:04:19.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.417 08:04:24 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.417 08:04:24 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.417 08:04:24 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.676 08:04:24 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.676 08:04:24 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.676 08:04:24 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.676 08:04:24 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.676 08:04:24 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.676 08:04:24 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.676 08:04:24 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.676 08:04:24 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.676 08:04:24 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.676 08:04:24 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.676 08:04:24 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.676 08:04:24 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.676 08:04:24 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:19.676 08:04:24 rpc -- scripts/common.sh@345 -- # : 1 00:04:19.676 08:04:24 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.676 08:04:24 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.676 08:04:24 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:19.676 08:04:24 rpc -- scripts/common.sh@353 -- # local d=1 00:04:19.676 08:04:24 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.676 08:04:24 rpc -- scripts/common.sh@355 -- # echo 1 00:04:19.676 08:04:24 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.676 08:04:24 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:19.676 08:04:24 rpc -- scripts/common.sh@353 -- # local d=2 00:04:19.676 08:04:24 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.676 08:04:24 rpc -- scripts/common.sh@355 -- # echo 2 00:04:19.676 08:04:24 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.676 08:04:24 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.676 08:04:24 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.676 08:04:24 rpc -- scripts/common.sh@368 -- # return 0 00:04:19.676 08:04:24 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.676 08:04:24 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.676 --rc genhtml_branch_coverage=1 00:04:19.676 --rc genhtml_function_coverage=1 00:04:19.676 --rc genhtml_legend=1 00:04:19.676 --rc geninfo_all_blocks=1 00:04:19.676 --rc geninfo_unexecuted_blocks=1 00:04:19.676 00:04:19.676 ' 00:04:19.676 08:04:24 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.676 --rc genhtml_branch_coverage=1 00:04:19.676 --rc genhtml_function_coverage=1 00:04:19.676 --rc genhtml_legend=1 00:04:19.676 --rc geninfo_all_blocks=1 00:04:19.676 --rc geninfo_unexecuted_blocks=1 00:04:19.676 00:04:19.676 ' 00:04:19.676 08:04:24 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.676 --rc genhtml_branch_coverage=1 00:04:19.676 --rc genhtml_function_coverage=1 00:04:19.676 --rc genhtml_legend=1 00:04:19.676 --rc geninfo_all_blocks=1 00:04:19.676 --rc geninfo_unexecuted_blocks=1 00:04:19.676 00:04:19.676 ' 00:04:19.676 08:04:24 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.676 --rc genhtml_branch_coverage=1 00:04:19.676 --rc genhtml_function_coverage=1 00:04:19.676 --rc genhtml_legend=1 00:04:19.676 --rc geninfo_all_blocks=1 00:04:19.676 --rc geninfo_unexecuted_blocks=1 00:04:19.676 00:04:19.676 ' 00:04:19.676 08:04:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57745 00:04:19.676 08:04:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.676 08:04:24 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:19.676 08:04:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57745 00:04:19.676 08:04:24 rpc -- common/autotest_common.sh@835 -- # '[' -z 57745 ']' 00:04:19.676 08:04:24 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.676 08:04:24 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.676 08:04:24 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
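Everything from here to the end of the rpc suite drives a live spdk_tgt over its UNIX-domain RPC socket (/var/tmp/spdk.sock, the SPDK default): the harness launches the target with the bdev tracepoint group enabled, waits for the socket, then issues RPCs through rpc_cmd. The same flow by hand, a sketch using the in-tree rpc.py client with the paths from this log:

  # start the target with bdev tracepoints, as the harness does below
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # once the socket is up, replay the calls the integrity test exercises
  $rpc bdev_malloc_create 8 512                      # 8 MB / 512 B blocks -> 16384 blocks, name Malloc0
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # claims Malloc0 exclusive_write
  $rpc bdev_get_bdevs | jq length                    # expect 2
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0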
00:04:19.676 08:04:24 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.676 08:04:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.676 [2024-11-17 08:04:24.599111] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:19.676 [2024-11-17 08:04:24.599554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57745 ] 00:04:19.935 [2024-11-17 08:04:24.779528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.935 [2024-11-17 08:04:24.858451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:19.935 [2024-11-17 08:04:24.858513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57745' to capture a snapshot of events at runtime. 00:04:19.935 [2024-11-17 08:04:24.858527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:19.935 [2024-11-17 08:04:24.858538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:19.935 [2024-11-17 08:04:24.858547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57745 for offline analysis/debug. 00:04:19.935 [2024-11-17 08:04:24.859577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.872 08:04:25 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.872 08:04:25 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:20.872 08:04:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.872 08:04:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.872 08:04:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:20.872 08:04:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:20.872 08:04:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.872 08:04:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.872 08:04:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.872 ************************************ 00:04:20.872 START TEST rpc_integrity 00:04:20.872 ************************************ 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.872 08:04:25 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:20.872 { 00:04:20.872 "name": "Malloc0", 00:04:20.872 "aliases": [ 00:04:20.872 "30157b84-5056-460a-b7e9-fe9df90703ef" 00:04:20.872 ], 00:04:20.872 "product_name": "Malloc disk", 00:04:20.872 "block_size": 512, 00:04:20.872 "num_blocks": 16384, 00:04:20.872 "uuid": "30157b84-5056-460a-b7e9-fe9df90703ef", 00:04:20.872 "assigned_rate_limits": { 00:04:20.872 "rw_ios_per_sec": 0, 00:04:20.872 "rw_mbytes_per_sec": 0, 00:04:20.872 "r_mbytes_per_sec": 0, 00:04:20.872 "w_mbytes_per_sec": 0 00:04:20.872 }, 00:04:20.872 "claimed": false, 00:04:20.872 "zoned": false, 00:04:20.872 "supported_io_types": { 00:04:20.872 "read": true, 00:04:20.872 "write": true, 00:04:20.872 "unmap": true, 00:04:20.872 "flush": true, 00:04:20.872 "reset": true, 00:04:20.872 "nvme_admin": false, 00:04:20.872 "nvme_io": false, 00:04:20.872 "nvme_io_md": false, 00:04:20.872 "write_zeroes": true, 00:04:20.872 "zcopy": true, 00:04:20.872 "get_zone_info": false, 00:04:20.872 "zone_management": false, 00:04:20.872 "zone_append": false, 00:04:20.872 "compare": false, 00:04:20.872 "compare_and_write": false, 00:04:20.872 "abort": true, 00:04:20.872 "seek_hole": false, 00:04:20.872 "seek_data": false, 00:04:20.872 "copy": true, 00:04:20.872 "nvme_iov_md": false 00:04:20.872 }, 00:04:20.872 "memory_domains": [ 00:04:20.872 { 00:04:20.872 "dma_device_id": "system", 00:04:20.872 "dma_device_type": 1 00:04:20.872 }, 00:04:20.872 { 00:04:20.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.872 "dma_device_type": 2 00:04:20.872 } 00:04:20.872 ], 00:04:20.872 "driver_specific": {} 00:04:20.872 } 00:04:20.872 ]' 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.872 [2024-11-17 08:04:25.697408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:20.872 [2024-11-17 08:04:25.697515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:20.872 [2024-11-17 08:04:25.697544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:20.872 [2024-11-17 08:04:25.697559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:20.872 [2024-11-17 08:04:25.700129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:20.872 [2024-11-17 08:04:25.700196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:20.872 Passthru0 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.872 
08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.872 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:20.872 { 00:04:20.872 "name": "Malloc0", 00:04:20.872 "aliases": [ 00:04:20.872 "30157b84-5056-460a-b7e9-fe9df90703ef" 00:04:20.872 ], 00:04:20.872 "product_name": "Malloc disk", 00:04:20.872 "block_size": 512, 00:04:20.872 "num_blocks": 16384, 00:04:20.872 "uuid": "30157b84-5056-460a-b7e9-fe9df90703ef", 00:04:20.872 "assigned_rate_limits": { 00:04:20.872 "rw_ios_per_sec": 0, 00:04:20.872 "rw_mbytes_per_sec": 0, 00:04:20.872 "r_mbytes_per_sec": 0, 00:04:20.872 "w_mbytes_per_sec": 0 00:04:20.872 }, 00:04:20.872 "claimed": true, 00:04:20.872 "claim_type": "exclusive_write", 00:04:20.872 "zoned": false, 00:04:20.872 "supported_io_types": { 00:04:20.872 "read": true, 00:04:20.872 "write": true, 00:04:20.872 "unmap": true, 00:04:20.872 "flush": true, 00:04:20.872 "reset": true, 00:04:20.872 "nvme_admin": false, 00:04:20.872 "nvme_io": false, 00:04:20.872 "nvme_io_md": false, 00:04:20.872 "write_zeroes": true, 00:04:20.872 "zcopy": true, 00:04:20.872 "get_zone_info": false, 00:04:20.872 "zone_management": false, 00:04:20.872 "zone_append": false, 00:04:20.872 "compare": false, 00:04:20.872 "compare_and_write": false, 00:04:20.872 "abort": true, 00:04:20.872 "seek_hole": false, 00:04:20.872 "seek_data": false, 00:04:20.872 "copy": true, 00:04:20.872 "nvme_iov_md": false 00:04:20.872 }, 00:04:20.872 "memory_domains": [ 00:04:20.872 { 00:04:20.872 "dma_device_id": "system", 00:04:20.872 "dma_device_type": 1 00:04:20.872 }, 00:04:20.872 { 00:04:20.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.872 "dma_device_type": 2 00:04:20.872 } 00:04:20.872 ], 00:04:20.872 "driver_specific": {} 00:04:20.872 }, 00:04:20.872 { 00:04:20.872 "name": "Passthru0", 00:04:20.872 "aliases": [ 00:04:20.872 "634a98c2-be77-5c0a-af1d-bdb4c6b05f29" 00:04:20.872 ], 00:04:20.872 "product_name": "passthru", 00:04:20.872 "block_size": 512, 00:04:20.872 "num_blocks": 16384, 00:04:20.872 "uuid": "634a98c2-be77-5c0a-af1d-bdb4c6b05f29", 00:04:20.872 "assigned_rate_limits": { 00:04:20.872 "rw_ios_per_sec": 0, 00:04:20.872 "rw_mbytes_per_sec": 0, 00:04:20.872 "r_mbytes_per_sec": 0, 00:04:20.872 "w_mbytes_per_sec": 0 00:04:20.872 }, 00:04:20.872 "claimed": false, 00:04:20.872 "zoned": false, 00:04:20.872 "supported_io_types": { 00:04:20.872 "read": true, 00:04:20.872 "write": true, 00:04:20.872 "unmap": true, 00:04:20.872 "flush": true, 00:04:20.872 "reset": true, 00:04:20.872 "nvme_admin": false, 00:04:20.872 "nvme_io": false, 00:04:20.872 "nvme_io_md": false, 00:04:20.872 "write_zeroes": true, 00:04:20.872 "zcopy": true, 00:04:20.872 "get_zone_info": false, 00:04:20.872 "zone_management": false, 00:04:20.872 "zone_append": false, 00:04:20.872 "compare": false, 00:04:20.872 "compare_and_write": false, 00:04:20.872 "abort": true, 00:04:20.872 "seek_hole": false, 00:04:20.872 "seek_data": false, 00:04:20.872 "copy": true, 00:04:20.872 "nvme_iov_md": false 00:04:20.872 }, 00:04:20.872 "memory_domains": [ 00:04:20.872 { 00:04:20.872 "dma_device_id": "system", 00:04:20.872 "dma_device_type": 1 00:04:20.872 }, 00:04:20.872 { 00:04:20.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.872 "dma_device_type": 2 
00:04:20.872 } 00:04:20.872 ], 00:04:20.872 "driver_specific": { 00:04:20.872 "passthru": { 00:04:20.872 "name": "Passthru0", 00:04:20.872 "base_bdev_name": "Malloc0" 00:04:20.872 } 00:04:20.872 } 00:04:20.872 } 00:04:20.872 ]' 00:04:20.872 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:20.873 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:20.873 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:20.873 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.873 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.873 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.873 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:20.873 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.873 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.873 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.873 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:20.873 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.873 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.873 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.873 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:20.873 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:21.132 ************************************ 00:04:21.132 END TEST rpc_integrity 00:04:21.132 ************************************ 00:04:21.132 08:04:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.132 00:04:21.132 real 0m0.348s 00:04:21.132 user 0m0.220s 00:04:21.132 sys 0m0.039s 00:04:21.132 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.132 08:04:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.132 08:04:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:21.132 08:04:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.132 08:04:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.132 08:04:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.132 ************************************ 00:04:21.132 START TEST rpc_plugins 00:04:21.132 ************************************ 00:04:21.132 08:04:25 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:21.132 08:04:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:21.132 08:04:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.132 08:04:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.132 08:04:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.132 08:04:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:21.132 08:04:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:21.132 08:04:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.132 08:04:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.132 08:04:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.132 08:04:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:21.132 { 00:04:21.132 "name": "Malloc1", 00:04:21.132 "aliases": 
[ 00:04:21.132 "333f1964-fc44-411e-a776-064b5450fb8a" 00:04:21.132 ], 00:04:21.132 "product_name": "Malloc disk", 00:04:21.132 "block_size": 4096, 00:04:21.132 "num_blocks": 256, 00:04:21.132 "uuid": "333f1964-fc44-411e-a776-064b5450fb8a", 00:04:21.132 "assigned_rate_limits": { 00:04:21.132 "rw_ios_per_sec": 0, 00:04:21.132 "rw_mbytes_per_sec": 0, 00:04:21.132 "r_mbytes_per_sec": 0, 00:04:21.132 "w_mbytes_per_sec": 0 00:04:21.132 }, 00:04:21.132 "claimed": false, 00:04:21.132 "zoned": false, 00:04:21.132 "supported_io_types": { 00:04:21.132 "read": true, 00:04:21.132 "write": true, 00:04:21.132 "unmap": true, 00:04:21.132 "flush": true, 00:04:21.132 "reset": true, 00:04:21.132 "nvme_admin": false, 00:04:21.132 "nvme_io": false, 00:04:21.132 "nvme_io_md": false, 00:04:21.132 "write_zeroes": true, 00:04:21.132 "zcopy": true, 00:04:21.132 "get_zone_info": false, 00:04:21.132 "zone_management": false, 00:04:21.132 "zone_append": false, 00:04:21.132 "compare": false, 00:04:21.132 "compare_and_write": false, 00:04:21.132 "abort": true, 00:04:21.132 "seek_hole": false, 00:04:21.132 "seek_data": false, 00:04:21.132 "copy": true, 00:04:21.132 "nvme_iov_md": false 00:04:21.132 }, 00:04:21.132 "memory_domains": [ 00:04:21.132 { 00:04:21.132 "dma_device_id": "system", 00:04:21.132 "dma_device_type": 1 00:04:21.132 }, 00:04:21.132 { 00:04:21.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.132 "dma_device_type": 2 00:04:21.132 } 00:04:21.132 ], 00:04:21.132 "driver_specific": {} 00:04:21.132 } 00:04:21.132 ]' 00:04:21.132 08:04:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:21.132 08:04:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:21.132 08:04:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:21.132 08:04:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.132 08:04:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.132 08:04:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.132 08:04:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:21.132 08:04:26 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.132 08:04:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.132 08:04:26 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.132 08:04:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:21.132 08:04:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:21.132 ************************************ 00:04:21.132 END TEST rpc_plugins 00:04:21.132 ************************************ 00:04:21.132 08:04:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:21.132 00:04:21.132 real 0m0.171s 00:04:21.132 user 0m0.113s 00:04:21.132 sys 0m0.019s 00:04:21.132 08:04:26 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.132 08:04:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.391 08:04:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:21.391 08:04:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.391 08:04:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.391 08:04:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.391 ************************************ 00:04:21.391 START TEST rpc_trace_cmd_test 00:04:21.391 ************************************ 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:21.391 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57745", 00:04:21.391 "tpoint_group_mask": "0x8", 00:04:21.391 "iscsi_conn": { 00:04:21.391 "mask": "0x2", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "scsi": { 00:04:21.391 "mask": "0x4", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "bdev": { 00:04:21.391 "mask": "0x8", 00:04:21.391 "tpoint_mask": "0xffffffffffffffff" 00:04:21.391 }, 00:04:21.391 "nvmf_rdma": { 00:04:21.391 "mask": "0x10", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "nvmf_tcp": { 00:04:21.391 "mask": "0x20", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "ftl": { 00:04:21.391 "mask": "0x40", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "blobfs": { 00:04:21.391 "mask": "0x80", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "dsa": { 00:04:21.391 "mask": "0x200", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "thread": { 00:04:21.391 "mask": "0x400", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "nvme_pcie": { 00:04:21.391 "mask": "0x800", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "iaa": { 00:04:21.391 "mask": "0x1000", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "nvme_tcp": { 00:04:21.391 "mask": "0x2000", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "bdev_nvme": { 00:04:21.391 "mask": "0x4000", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "sock": { 00:04:21.391 "mask": "0x8000", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "blob": { 00:04:21.391 "mask": "0x10000", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "bdev_raid": { 00:04:21.391 "mask": "0x20000", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 }, 00:04:21.391 "scheduler": { 00:04:21.391 "mask": "0x40000", 00:04:21.391 "tpoint_mask": "0x0" 00:04:21.391 } 00:04:21.391 }' 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:21.391 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:21.651 ************************************ 00:04:21.651 END TEST rpc_trace_cmd_test 00:04:21.651 ************************************ 00:04:21.651 08:04:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:21.651 00:04:21.651 real 0m0.248s 
00:04:21.651 user 0m0.215s 00:04:21.651 sys 0m0.026s 00:04:21.651 08:04:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.651 08:04:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.651 08:04:26 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:21.651 08:04:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:21.651 08:04:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:21.651 08:04:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.651 08:04:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.651 08:04:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.651 ************************************ 00:04:21.651 START TEST rpc_daemon_integrity 00:04:21.651 ************************************ 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.651 { 00:04:21.651 "name": "Malloc2", 00:04:21.651 "aliases": [ 00:04:21.651 "412c2441-093a-46c8-b590-8e6e7b772a06" 00:04:21.651 ], 00:04:21.651 "product_name": "Malloc disk", 00:04:21.651 "block_size": 512, 00:04:21.651 "num_blocks": 16384, 00:04:21.651 "uuid": "412c2441-093a-46c8-b590-8e6e7b772a06", 00:04:21.651 "assigned_rate_limits": { 00:04:21.651 "rw_ios_per_sec": 0, 00:04:21.651 "rw_mbytes_per_sec": 0, 00:04:21.651 "r_mbytes_per_sec": 0, 00:04:21.651 "w_mbytes_per_sec": 0 00:04:21.651 }, 00:04:21.651 "claimed": false, 00:04:21.651 "zoned": false, 00:04:21.651 "supported_io_types": { 00:04:21.651 "read": true, 00:04:21.651 "write": true, 00:04:21.651 "unmap": true, 00:04:21.651 "flush": true, 00:04:21.651 "reset": true, 00:04:21.651 "nvme_admin": false, 00:04:21.651 "nvme_io": false, 00:04:21.651 "nvme_io_md": false, 00:04:21.651 "write_zeroes": true, 00:04:21.651 "zcopy": true, 00:04:21.651 "get_zone_info": false, 00:04:21.651 "zone_management": false, 00:04:21.651 "zone_append": false, 00:04:21.651 "compare": false, 00:04:21.651 
"compare_and_write": false, 00:04:21.651 "abort": true, 00:04:21.651 "seek_hole": false, 00:04:21.651 "seek_data": false, 00:04:21.651 "copy": true, 00:04:21.651 "nvme_iov_md": false 00:04:21.651 }, 00:04:21.651 "memory_domains": [ 00:04:21.651 { 00:04:21.651 "dma_device_id": "system", 00:04:21.651 "dma_device_type": 1 00:04:21.651 }, 00:04:21.651 { 00:04:21.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.651 "dma_device_type": 2 00:04:21.651 } 00:04:21.651 ], 00:04:21.651 "driver_specific": {} 00:04:21.651 } 00:04:21.651 ]' 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.651 [2024-11-17 08:04:26.619058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:21.651 [2024-11-17 08:04:26.619170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.651 [2024-11-17 08:04:26.619198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:21.651 [2024-11-17 08:04:26.619213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.651 [2024-11-17 08:04:26.621781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.651 [2024-11-17 08:04:26.621946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.651 Passthru0 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.651 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.652 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.652 { 00:04:21.652 "name": "Malloc2", 00:04:21.652 "aliases": [ 00:04:21.652 "412c2441-093a-46c8-b590-8e6e7b772a06" 00:04:21.652 ], 00:04:21.652 "product_name": "Malloc disk", 00:04:21.652 "block_size": 512, 00:04:21.652 "num_blocks": 16384, 00:04:21.652 "uuid": "412c2441-093a-46c8-b590-8e6e7b772a06", 00:04:21.652 "assigned_rate_limits": { 00:04:21.652 "rw_ios_per_sec": 0, 00:04:21.652 "rw_mbytes_per_sec": 0, 00:04:21.652 "r_mbytes_per_sec": 0, 00:04:21.652 "w_mbytes_per_sec": 0 00:04:21.652 }, 00:04:21.652 "claimed": true, 00:04:21.652 "claim_type": "exclusive_write", 00:04:21.652 "zoned": false, 00:04:21.652 "supported_io_types": { 00:04:21.652 "read": true, 00:04:21.652 "write": true, 00:04:21.652 "unmap": true, 00:04:21.652 "flush": true, 00:04:21.652 "reset": true, 00:04:21.652 "nvme_admin": false, 00:04:21.652 "nvme_io": false, 00:04:21.652 "nvme_io_md": false, 00:04:21.652 "write_zeroes": true, 00:04:21.652 "zcopy": true, 00:04:21.652 "get_zone_info": false, 00:04:21.652 "zone_management": false, 00:04:21.652 "zone_append": false, 00:04:21.652 "compare": false, 00:04:21.652 "compare_and_write": false, 00:04:21.652 "abort": true, 00:04:21.652 "seek_hole": false, 00:04:21.652 "seek_data": false, 
00:04:21.652 "copy": true, 00:04:21.652 "nvme_iov_md": false 00:04:21.652 }, 00:04:21.652 "memory_domains": [ 00:04:21.652 { 00:04:21.652 "dma_device_id": "system", 00:04:21.652 "dma_device_type": 1 00:04:21.652 }, 00:04:21.652 { 00:04:21.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.652 "dma_device_type": 2 00:04:21.652 } 00:04:21.652 ], 00:04:21.652 "driver_specific": {} 00:04:21.652 }, 00:04:21.652 { 00:04:21.652 "name": "Passthru0", 00:04:21.652 "aliases": [ 00:04:21.652 "56ab185e-3266-5e6b-9c91-8840068e5d37" 00:04:21.652 ], 00:04:21.652 "product_name": "passthru", 00:04:21.652 "block_size": 512, 00:04:21.652 "num_blocks": 16384, 00:04:21.652 "uuid": "56ab185e-3266-5e6b-9c91-8840068e5d37", 00:04:21.652 "assigned_rate_limits": { 00:04:21.652 "rw_ios_per_sec": 0, 00:04:21.652 "rw_mbytes_per_sec": 0, 00:04:21.652 "r_mbytes_per_sec": 0, 00:04:21.652 "w_mbytes_per_sec": 0 00:04:21.652 }, 00:04:21.652 "claimed": false, 00:04:21.652 "zoned": false, 00:04:21.652 "supported_io_types": { 00:04:21.652 "read": true, 00:04:21.652 "write": true, 00:04:21.652 "unmap": true, 00:04:21.652 "flush": true, 00:04:21.652 "reset": true, 00:04:21.652 "nvme_admin": false, 00:04:21.652 "nvme_io": false, 00:04:21.652 "nvme_io_md": false, 00:04:21.652 "write_zeroes": true, 00:04:21.652 "zcopy": true, 00:04:21.652 "get_zone_info": false, 00:04:21.652 "zone_management": false, 00:04:21.652 "zone_append": false, 00:04:21.652 "compare": false, 00:04:21.652 "compare_and_write": false, 00:04:21.652 "abort": true, 00:04:21.652 "seek_hole": false, 00:04:21.652 "seek_data": false, 00:04:21.652 "copy": true, 00:04:21.652 "nvme_iov_md": false 00:04:21.652 }, 00:04:21.652 "memory_domains": [ 00:04:21.652 { 00:04:21.652 "dma_device_id": "system", 00:04:21.652 "dma_device_type": 1 00:04:21.652 }, 00:04:21.652 { 00:04:21.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.652 "dma_device_type": 2 00:04:21.652 } 00:04:21.652 ], 00:04:21.652 "driver_specific": { 00:04:21.652 "passthru": { 00:04:21.652 "name": "Passthru0", 00:04:21.652 "base_bdev_name": "Malloc2" 00:04:21.652 } 00:04:21.652 } 00:04:21.652 } 00:04:21.652 ]' 00:04:21.652 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:21.912 ************************************ 00:04:21.912 END TEST rpc_daemon_integrity 00:04:21.912 ************************************ 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.912 00:04:21.912 real 0m0.346s 00:04:21.912 user 0m0.221s 00:04:21.912 sys 0m0.037s 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.912 08:04:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.912 08:04:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:21.912 08:04:26 rpc -- rpc/rpc.sh@84 -- # killprocess 57745 00:04:21.912 08:04:26 rpc -- common/autotest_common.sh@954 -- # '[' -z 57745 ']' 00:04:21.912 08:04:26 rpc -- common/autotest_common.sh@958 -- # kill -0 57745 00:04:21.912 08:04:26 rpc -- common/autotest_common.sh@959 -- # uname 00:04:21.912 08:04:26 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.912 08:04:26 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57745 00:04:21.912 killing process with pid 57745 00:04:21.912 08:04:26 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.912 08:04:26 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.912 08:04:26 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57745' 00:04:21.912 08:04:26 rpc -- common/autotest_common.sh@973 -- # kill 57745 00:04:21.912 08:04:26 rpc -- common/autotest_common.sh@978 -- # wait 57745 00:04:23.866 00:04:23.866 real 0m4.220s 00:04:23.866 user 0m5.034s 00:04:23.866 sys 0m0.717s 00:04:23.866 08:04:28 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.866 ************************************ 00:04:23.866 END TEST rpc 00:04:23.866 08:04:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.866 ************************************ 00:04:23.866 08:04:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:23.866 08:04:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.866 08:04:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.866 08:04:28 -- common/autotest_common.sh@10 -- # set +x 00:04:23.866 ************************************ 00:04:23.866 START TEST skip_rpc 00:04:23.866 ************************************ 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:23.866 * Looking for test storage... 
00:04:23.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.866 08:04:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.866 --rc genhtml_branch_coverage=1 00:04:23.866 --rc genhtml_function_coverage=1 00:04:23.866 --rc genhtml_legend=1 00:04:23.866 --rc geninfo_all_blocks=1 00:04:23.866 --rc geninfo_unexecuted_blocks=1 00:04:23.866 00:04:23.866 ' 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.866 --rc genhtml_branch_coverage=1 00:04:23.866 --rc genhtml_function_coverage=1 00:04:23.866 --rc genhtml_legend=1 00:04:23.866 --rc geninfo_all_blocks=1 00:04:23.866 --rc geninfo_unexecuted_blocks=1 00:04:23.866 00:04:23.866 ' 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:23.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.866 --rc genhtml_branch_coverage=1 00:04:23.866 --rc genhtml_function_coverage=1 00:04:23.866 --rc genhtml_legend=1 00:04:23.866 --rc geninfo_all_blocks=1 00:04:23.866 --rc geninfo_unexecuted_blocks=1 00:04:23.866 00:04:23.866 ' 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.866 --rc genhtml_branch_coverage=1 00:04:23.866 --rc genhtml_function_coverage=1 00:04:23.866 --rc genhtml_legend=1 00:04:23.866 --rc geninfo_all_blocks=1 00:04:23.866 --rc geninfo_unexecuted_blocks=1 00:04:23.866 00:04:23.866 ' 00:04:23.866 08:04:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:23.866 08:04:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:23.866 08:04:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.866 08:04:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.866 ************************************ 00:04:23.866 START TEST skip_rpc 00:04:23.867 ************************************ 00:04:23.867 08:04:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:23.867 08:04:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57963 00:04:23.867 08:04:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.867 08:04:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:23.867 08:04:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:23.867 [2024-11-17 08:04:28.874948] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:23.867 [2024-11-17 08:04:28.875358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57963 ] 00:04:24.125 [2024-11-17 08:04:29.054371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.125 [2024-11-17 08:04:29.133553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57963 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57963 ']' 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57963 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57963 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57963' 00:04:29.396 killing process with pid 57963 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57963 00:04:29.396 08:04:33 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57963 00:04:30.772 00:04:30.772 real 0m6.656s 00:04:30.772 user 0m6.265s 00:04:30.772 sys 0m0.297s 00:04:30.772 08:04:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.772 ************************************ 00:04:30.772 END TEST skip_rpc 00:04:30.772 ************************************ 00:04:30.772 08:04:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:30.772 08:04:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:30.772 08:04:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.772 08:04:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.772 08:04:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.772 ************************************ 00:04:30.772 START TEST skip_rpc_with_json 00:04:30.772 ************************************ 00:04:30.772 08:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:30.772 08:04:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:30.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.772 08:04:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58061 00:04:30.772 08:04:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.772 08:04:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.772 08:04:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58061 00:04:30.772 08:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58061 ']' 00:04:30.772 08:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.772 08:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.772 08:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.772 08:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.772 08:04:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.772 [2024-11-17 08:04:35.540714] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:30.772 [2024-11-17 08:04:35.540996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58061 ] 00:04:30.772 [2024-11-17 08:04:35.701363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.772 [2024-11-17 08:04:35.781087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.708 [2024-11-17 08:04:36.485792] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:31.708 request: 00:04:31.708 { 00:04:31.708 "trtype": "tcp", 00:04:31.708 "method": "nvmf_get_transports", 00:04:31.708 "req_id": 1 00:04:31.708 } 00:04:31.708 Got JSON-RPC error response 00:04:31.708 response: 00:04:31.708 { 00:04:31.708 "code": -19, 00:04:31.708 "message": "No such device" 00:04:31.708 } 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.708 [2024-11-17 08:04:36.497907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.708 08:04:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:31.708 { 00:04:31.708 "subsystems": [ 00:04:31.708 { 00:04:31.708 "subsystem": "fsdev", 00:04:31.708 "config": [ 00:04:31.708 { 00:04:31.708 "method": "fsdev_set_opts", 00:04:31.708 "params": { 00:04:31.708 "fsdev_io_pool_size": 65535, 00:04:31.708 "fsdev_io_cache_size": 256 00:04:31.708 } 00:04:31.708 } 00:04:31.708 ] 00:04:31.708 }, 00:04:31.708 { 00:04:31.708 "subsystem": "keyring", 00:04:31.708 "config": [] 00:04:31.708 }, 00:04:31.708 { 00:04:31.708 "subsystem": "iobuf", 00:04:31.708 "config": [ 00:04:31.708 { 00:04:31.708 "method": "iobuf_set_options", 00:04:31.708 "params": { 00:04:31.708 "small_pool_count": 8192, 00:04:31.708 "large_pool_count": 1024, 00:04:31.708 "small_bufsize": 8192, 00:04:31.708 "large_bufsize": 135168, 00:04:31.708 "enable_numa": false 00:04:31.708 } 00:04:31.708 } 00:04:31.708 ] 00:04:31.708 }, 00:04:31.708 { 00:04:31.708 "subsystem": "sock", 00:04:31.708 "config": [ 00:04:31.709 { 
00:04:31.709 "method": "sock_set_default_impl", 00:04:31.709 "params": { 00:04:31.709 "impl_name": "posix" 00:04:31.709 } 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "method": "sock_impl_set_options", 00:04:31.709 "params": { 00:04:31.709 "impl_name": "ssl", 00:04:31.709 "recv_buf_size": 4096, 00:04:31.709 "send_buf_size": 4096, 00:04:31.709 "enable_recv_pipe": true, 00:04:31.709 "enable_quickack": false, 00:04:31.709 "enable_placement_id": 0, 00:04:31.709 "enable_zerocopy_send_server": true, 00:04:31.709 "enable_zerocopy_send_client": false, 00:04:31.709 "zerocopy_threshold": 0, 00:04:31.709 "tls_version": 0, 00:04:31.709 "enable_ktls": false 00:04:31.709 } 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "method": "sock_impl_set_options", 00:04:31.709 "params": { 00:04:31.709 "impl_name": "posix", 00:04:31.709 "recv_buf_size": 2097152, 00:04:31.709 "send_buf_size": 2097152, 00:04:31.709 "enable_recv_pipe": true, 00:04:31.709 "enable_quickack": false, 00:04:31.709 "enable_placement_id": 0, 00:04:31.709 "enable_zerocopy_send_server": true, 00:04:31.709 "enable_zerocopy_send_client": false, 00:04:31.709 "zerocopy_threshold": 0, 00:04:31.709 "tls_version": 0, 00:04:31.709 "enable_ktls": false 00:04:31.709 } 00:04:31.709 } 00:04:31.709 ] 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "subsystem": "vmd", 00:04:31.709 "config": [] 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "subsystem": "accel", 00:04:31.709 "config": [ 00:04:31.709 { 00:04:31.709 "method": "accel_set_options", 00:04:31.709 "params": { 00:04:31.709 "small_cache_size": 128, 00:04:31.709 "large_cache_size": 16, 00:04:31.709 "task_count": 2048, 00:04:31.709 "sequence_count": 2048, 00:04:31.709 "buf_count": 2048 00:04:31.709 } 00:04:31.709 } 00:04:31.709 ] 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "subsystem": "bdev", 00:04:31.709 "config": [ 00:04:31.709 { 00:04:31.709 "method": "bdev_set_options", 00:04:31.709 "params": { 00:04:31.709 "bdev_io_pool_size": 65535, 00:04:31.709 "bdev_io_cache_size": 256, 00:04:31.709 "bdev_auto_examine": true, 00:04:31.709 "iobuf_small_cache_size": 128, 00:04:31.709 "iobuf_large_cache_size": 16 00:04:31.709 } 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "method": "bdev_raid_set_options", 00:04:31.709 "params": { 00:04:31.709 "process_window_size_kb": 1024, 00:04:31.709 "process_max_bandwidth_mb_sec": 0 00:04:31.709 } 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "method": "bdev_iscsi_set_options", 00:04:31.709 "params": { 00:04:31.709 "timeout_sec": 30 00:04:31.709 } 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "method": "bdev_nvme_set_options", 00:04:31.709 "params": { 00:04:31.709 "action_on_timeout": "none", 00:04:31.709 "timeout_us": 0, 00:04:31.709 "timeout_admin_us": 0, 00:04:31.709 "keep_alive_timeout_ms": 10000, 00:04:31.709 "arbitration_burst": 0, 00:04:31.709 "low_priority_weight": 0, 00:04:31.709 "medium_priority_weight": 0, 00:04:31.709 "high_priority_weight": 0, 00:04:31.709 "nvme_adminq_poll_period_us": 10000, 00:04:31.709 "nvme_ioq_poll_period_us": 0, 00:04:31.709 "io_queue_requests": 0, 00:04:31.709 "delay_cmd_submit": true, 00:04:31.709 "transport_retry_count": 4, 00:04:31.709 "bdev_retry_count": 3, 00:04:31.709 "transport_ack_timeout": 0, 00:04:31.709 "ctrlr_loss_timeout_sec": 0, 00:04:31.709 "reconnect_delay_sec": 0, 00:04:31.709 "fast_io_fail_timeout_sec": 0, 00:04:31.709 "disable_auto_failback": false, 00:04:31.709 "generate_uuids": false, 00:04:31.709 "transport_tos": 0, 00:04:31.709 "nvme_error_stat": false, 00:04:31.709 "rdma_srq_size": 0, 00:04:31.709 "io_path_stat": false, 
00:04:31.709 "allow_accel_sequence": false, 00:04:31.709 "rdma_max_cq_size": 0, 00:04:31.709 "rdma_cm_event_timeout_ms": 0, 00:04:31.709 "dhchap_digests": [ 00:04:31.709 "sha256", 00:04:31.709 "sha384", 00:04:31.709 "sha512" 00:04:31.709 ], 00:04:31.709 "dhchap_dhgroups": [ 00:04:31.709 "null", 00:04:31.709 "ffdhe2048", 00:04:31.709 "ffdhe3072", 00:04:31.709 "ffdhe4096", 00:04:31.709 "ffdhe6144", 00:04:31.709 "ffdhe8192" 00:04:31.709 ] 00:04:31.709 } 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "method": "bdev_nvme_set_hotplug", 00:04:31.709 "params": { 00:04:31.709 "period_us": 100000, 00:04:31.709 "enable": false 00:04:31.709 } 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "method": "bdev_wait_for_examine" 00:04:31.709 } 00:04:31.709 ] 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "subsystem": "scsi", 00:04:31.709 "config": null 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "subsystem": "scheduler", 00:04:31.709 "config": [ 00:04:31.709 { 00:04:31.709 "method": "framework_set_scheduler", 00:04:31.709 "params": { 00:04:31.709 "name": "static" 00:04:31.709 } 00:04:31.709 } 00:04:31.709 ] 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "subsystem": "vhost_scsi", 00:04:31.709 "config": [] 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "subsystem": "vhost_blk", 00:04:31.709 "config": [] 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "subsystem": "ublk", 00:04:31.709 "config": [] 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "subsystem": "nbd", 00:04:31.709 "config": [] 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "subsystem": "nvmf", 00:04:31.709 "config": [ 00:04:31.709 { 00:04:31.709 "method": "nvmf_set_config", 00:04:31.709 "params": { 00:04:31.709 "discovery_filter": "match_any", 00:04:31.709 "admin_cmd_passthru": { 00:04:31.709 "identify_ctrlr": false 00:04:31.709 }, 00:04:31.709 "dhchap_digests": [ 00:04:31.709 "sha256", 00:04:31.709 "sha384", 00:04:31.709 "sha512" 00:04:31.709 ], 00:04:31.709 "dhchap_dhgroups": [ 00:04:31.709 "null", 00:04:31.709 "ffdhe2048", 00:04:31.709 "ffdhe3072", 00:04:31.709 "ffdhe4096", 00:04:31.709 "ffdhe6144", 00:04:31.709 "ffdhe8192" 00:04:31.709 ] 00:04:31.709 } 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "method": "nvmf_set_max_subsystems", 00:04:31.709 "params": { 00:04:31.709 "max_subsystems": 1024 00:04:31.709 } 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "method": "nvmf_set_crdt", 00:04:31.709 "params": { 00:04:31.709 "crdt1": 0, 00:04:31.709 "crdt2": 0, 00:04:31.709 "crdt3": 0 00:04:31.709 } 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "method": "nvmf_create_transport", 00:04:31.709 "params": { 00:04:31.709 "trtype": "TCP", 00:04:31.709 "max_queue_depth": 128, 00:04:31.709 "max_io_qpairs_per_ctrlr": 127, 00:04:31.709 "in_capsule_data_size": 4096, 00:04:31.709 "max_io_size": 131072, 00:04:31.709 "io_unit_size": 131072, 00:04:31.709 "max_aq_depth": 128, 00:04:31.709 "num_shared_buffers": 511, 00:04:31.709 "buf_cache_size": 4294967295, 00:04:31.709 "dif_insert_or_strip": false, 00:04:31.709 "zcopy": false, 00:04:31.709 "c2h_success": true, 00:04:31.709 "sock_priority": 0, 00:04:31.709 "abort_timeout_sec": 1, 00:04:31.709 "ack_timeout": 0, 00:04:31.709 "data_wr_pool_size": 0 00:04:31.709 } 00:04:31.709 } 00:04:31.709 ] 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "subsystem": "iscsi", 00:04:31.709 "config": [ 00:04:31.709 { 00:04:31.709 "method": "iscsi_set_options", 00:04:31.709 "params": { 00:04:31.709 "node_base": "iqn.2016-06.io.spdk", 00:04:31.709 "max_sessions": 128, 00:04:31.709 "max_connections_per_session": 2, 00:04:31.709 "max_queue_depth": 64, 00:04:31.709 
"default_time2wait": 2, 00:04:31.709 "default_time2retain": 20, 00:04:31.709 "first_burst_length": 8192, 00:04:31.709 "immediate_data": true, 00:04:31.709 "allow_duplicated_isid": false, 00:04:31.709 "error_recovery_level": 0, 00:04:31.709 "nop_timeout": 60, 00:04:31.709 "nop_in_interval": 30, 00:04:31.709 "disable_chap": false, 00:04:31.709 "require_chap": false, 00:04:31.709 "mutual_chap": false, 00:04:31.709 "chap_group": 0, 00:04:31.709 "max_large_datain_per_connection": 64, 00:04:31.709 "max_r2t_per_connection": 4, 00:04:31.709 "pdu_pool_size": 36864, 00:04:31.709 "immediate_data_pool_size": 16384, 00:04:31.709 "data_out_pool_size": 2048 00:04:31.709 } 00:04:31.709 } 00:04:31.709 ] 00:04:31.709 } 00:04:31.709 ] 00:04:31.709 } 00:04:31.709 08:04:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:31.709 08:04:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58061 00:04:31.709 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58061 ']' 00:04:31.709 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58061 00:04:31.709 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:31.709 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.709 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58061 00:04:31.709 killing process with pid 58061 00:04:31.710 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.710 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.710 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58061' 00:04:31.710 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58061 00:04:31.710 08:04:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58061 00:04:33.616 08:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58106 00:04:33.616 08:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:33.616 08:04:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:38.891 08:04:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58106 00:04:38.891 08:04:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58106 ']' 00:04:38.891 08:04:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58106 00:04:38.891 08:04:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:38.891 08:04:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.891 08:04:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58106 00:04:38.891 killing process with pid 58106 00:04:38.891 08:04:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.891 08:04:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.891 08:04:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58106' 00:04:38.891 08:04:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58106 00:04:38.891 08:04:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58106 00:04:40.270 08:04:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.270 08:04:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.270 ************************************ 00:04:40.270 END TEST skip_rpc_with_json 00:04:40.270 ************************************ 00:04:40.270 00:04:40.270 real 0m9.526s 00:04:40.270 user 0m9.272s 00:04:40.270 sys 0m0.628s 00:04:40.270 08:04:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.270 08:04:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.270 08:04:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:40.271 08:04:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.271 08:04:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.271 08:04:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.271 ************************************ 00:04:40.271 START TEST skip_rpc_with_delay 00:04:40.271 ************************************ 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.271 [2024-11-17 08:04:45.158161] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
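The spdk_app_start ERROR line above is the expected outcome, not a failure: test_skip_rpc_with_delay launches spdk_tgt with both --no-rpc-server and --wait-for-rpc and requires the combination to be rejected, since there is no RPC server to wait for. The NOT wrapper from autotest_common.sh inverts the exit status so the test passes only when the app fails, and the real/user timing printed just below additionally shows it fails fast rather than hanging. A simplified sketch of that inversion pattern (the real helper in autotest_common.sh also tracks the es exit-code bookkeeping visible in the trace):

    # simplified sketch of the NOT() pattern: succeed iff the command fails
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded -> test failure
        fi
        return 0        # command failed, as the test requires
    }
    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc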
00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.271 00:04:40.271 real 0m0.189s 00:04:40.271 user 0m0.101s 00:04:40.271 sys 0m0.087s 00:04:40.271 ************************************ 00:04:40.271 END TEST skip_rpc_with_delay 00:04:40.271 ************************************ 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.271 08:04:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:40.271 08:04:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:40.271 08:04:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:40.271 08:04:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:40.271 08:04:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.271 08:04:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.271 08:04:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.271 ************************************ 00:04:40.271 START TEST exit_on_failed_rpc_init 00:04:40.271 ************************************ 00:04:40.271 08:04:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:40.271 08:04:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58229 00:04:40.271 08:04:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.271 08:04:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58229 00:04:40.271 08:04:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58229 ']' 00:04:40.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.271 08:04:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.271 08:04:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.271 08:04:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.271 08:04:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.271 08:04:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.530 [2024-11-17 08:04:45.398837] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:40.530 [2024-11-17 08:04:45.399017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58229 ] 00:04:40.789 [2024-11-17 08:04:45.578585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.789 [2024-11-17 08:04:45.657390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.356 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.356 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:41.356 08:04:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.356 08:04:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.356 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:41.356 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.356 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.356 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.357 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.357 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.357 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.357 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.357 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.357 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:41.357 08:04:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.628 [2024-11-17 08:04:46.463645] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:41.628 [2024-11-17 08:04:46.463818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58247 ] 00:04:41.939 [2024-11-17 08:04:46.648440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.939 [2024-11-17 08:04:46.772629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.939 [2024-11-17 08:04:46.772762] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
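Here, too, the failure is deliberate: exit_on_failed_rpc_init starts one spdk_tgt on core mask 0x1 that owns the default RPC socket, then starts a second on 0x2. The second instance trips over the occupied /var/tmp/spdk.sock, RPC init fails, and the app stops non-zero (the es=234 -> es=106 -> es=1 chain below normalizes the exit status for the test harness). A rough sketch of the collision and the usual way around it — spdk_tgt's -r flag selects the RPC socket path, though the alternate path shown is hypothetical:

    # sketch: reproduce the RPC socket collision exercised by the test
    ./build/bin/spdk_tgt -m 0x1 &        # first instance binds /var/tmp/spdk.sock
    FIRST_PID=$!
    sleep 1                              # stand-in for the harness's waitforlisten
    ./build/bin/spdk_tgt -m 0x2          # fails: socket path in use. Specify another.
    echo "second instance exited with $?"   # non-zero, as the test requires
    # a second instance normally gets its own socket (path is hypothetical):
    # ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock
    kill "$FIRST_PID"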
00:04:41.939 [2024-11-17 08:04:46.772790] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:41.939 [2024-11-17 08:04:46.772816] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58229 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58229 ']' 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58229 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58229 00:04:42.207 killing process with pid 58229 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58229' 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58229 00:04:42.207 08:04:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58229 00:04:44.112 00:04:44.112 real 0m3.375s 00:04:44.112 user 0m3.896s 00:04:44.112 sys 0m0.507s 00:04:44.112 08:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.112 ************************************ 00:04:44.112 08:04:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.112 END TEST exit_on_failed_rpc_init 00:04:44.112 ************************************ 00:04:44.112 08:04:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:44.112 ************************************ 00:04:44.112 END TEST skip_rpc 00:04:44.112 ************************************ 00:04:44.112 00:04:44.112 real 0m20.152s 00:04:44.112 user 0m19.725s 00:04:44.112 sys 0m1.719s 00:04:44.112 08:04:48 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.112 08:04:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.112 08:04:48 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:44.112 08:04:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.112 08:04:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.112 08:04:48 -- common/autotest_common.sh@10 -- # set +x 00:04:44.112 
************************************ 00:04:44.112 START TEST rpc_client 00:04:44.112 ************************************ 00:04:44.112 08:04:48 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:44.112 * Looking for test storage... 00:04:44.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:44.112 08:04:48 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.112 08:04:48 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.112 08:04:48 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.112 08:04:48 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:44.112 08:04:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.113 08:04:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:44.113 08:04:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.113 08:04:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:44.113 08:04:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:44.113 08:04:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.113 08:04:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:44.113 08:04:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.113 08:04:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.113 08:04:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.113 08:04:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:44.113 08:04:48 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.113 08:04:48 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.113 --rc genhtml_branch_coverage=1 00:04:44.113 --rc genhtml_function_coverage=1 00:04:44.113 --rc genhtml_legend=1 00:04:44.113 --rc geninfo_all_blocks=1 00:04:44.113 --rc geninfo_unexecuted_blocks=1 00:04:44.113 00:04:44.113 ' 00:04:44.113 08:04:48 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.113 --rc genhtml_branch_coverage=1 00:04:44.113 --rc genhtml_function_coverage=1 00:04:44.113 --rc genhtml_legend=1 00:04:44.113 --rc geninfo_all_blocks=1 00:04:44.113 --rc geninfo_unexecuted_blocks=1 00:04:44.113 00:04:44.113 ' 00:04:44.113 08:04:48 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.113 --rc genhtml_branch_coverage=1 00:04:44.113 --rc genhtml_function_coverage=1 00:04:44.113 --rc genhtml_legend=1 00:04:44.113 --rc geninfo_all_blocks=1 00:04:44.113 --rc geninfo_unexecuted_blocks=1 00:04:44.113 00:04:44.113 ' 00:04:44.113 08:04:48 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.113 --rc genhtml_branch_coverage=1 00:04:44.113 --rc genhtml_function_coverage=1 00:04:44.113 --rc genhtml_legend=1 00:04:44.113 --rc geninfo_all_blocks=1 00:04:44.113 --rc geninfo_unexecuted_blocks=1 00:04:44.113 00:04:44.113 ' 00:04:44.113 08:04:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:44.113 OK 00:04:44.113 08:04:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:44.113 00:04:44.113 real 0m0.264s 00:04:44.113 user 0m0.156s 00:04:44.113 sys 0m0.112s 00:04:44.113 ************************************ 00:04:44.113 END TEST rpc_client 00:04:44.113 ************************************ 00:04:44.113 08:04:49 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.113 08:04:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:44.113 08:04:49 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:44.113 08:04:49 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.113 08:04:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.113 08:04:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.113 ************************************ 00:04:44.113 START TEST json_config 00:04:44.113 ************************************ 00:04:44.113 08:04:49 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:44.113 08:04:49 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.113 08:04:49 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.113 08:04:49 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.372 08:04:49 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.372 08:04:49 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.372 08:04:49 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.372 08:04:49 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.372 08:04:49 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.372 08:04:49 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.372 08:04:49 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.372 08:04:49 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.372 08:04:49 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.372 08:04:49 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.372 08:04:49 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.372 08:04:49 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.372 08:04:49 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:44.372 08:04:49 json_config -- scripts/common.sh@345 -- # : 1 00:04:44.372 08:04:49 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.372 08:04:49 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.372 08:04:49 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:44.372 08:04:49 json_config -- scripts/common.sh@353 -- # local d=1 00:04:44.372 08:04:49 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.372 08:04:49 json_config -- scripts/common.sh@355 -- # echo 1 00:04:44.372 08:04:49 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.372 08:04:49 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:44.372 08:04:49 json_config -- scripts/common.sh@353 -- # local d=2 00:04:44.372 08:04:49 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.372 08:04:49 json_config -- scripts/common.sh@355 -- # echo 2 00:04:44.372 08:04:49 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.372 08:04:49 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.372 08:04:49 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.372 08:04:49 json_config -- scripts/common.sh@368 -- # return 0 00:04:44.372 08:04:49 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.372 08:04:49 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.372 --rc genhtml_branch_coverage=1 00:04:44.372 --rc genhtml_function_coverage=1 00:04:44.372 --rc genhtml_legend=1 00:04:44.372 --rc geninfo_all_blocks=1 00:04:44.372 --rc geninfo_unexecuted_blocks=1 00:04:44.372 00:04:44.372 ' 00:04:44.372 08:04:49 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.372 --rc genhtml_branch_coverage=1 00:04:44.372 --rc genhtml_function_coverage=1 00:04:44.372 --rc genhtml_legend=1 00:04:44.372 --rc geninfo_all_blocks=1 00:04:44.372 --rc geninfo_unexecuted_blocks=1 00:04:44.372 00:04:44.372 ' 00:04:44.372 08:04:49 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.372 --rc genhtml_branch_coverage=1 00:04:44.372 --rc genhtml_function_coverage=1 00:04:44.372 --rc genhtml_legend=1 00:04:44.372 --rc geninfo_all_blocks=1 00:04:44.372 --rc geninfo_unexecuted_blocks=1 00:04:44.372 00:04:44.372 ' 00:04:44.372 08:04:49 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.372 --rc genhtml_branch_coverage=1 00:04:44.372 --rc genhtml_function_coverage=1 00:04:44.372 --rc genhtml_legend=1 00:04:44.372 --rc geninfo_all_blocks=1 00:04:44.372 --rc geninfo_unexecuted_blocks=1 00:04:44.372 00:04:44.372 ' 00:04:44.372 08:04:49 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.372 08:04:49 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f728c72-5b6a-4803-83fc-68787cb19dfd 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8f728c72-5b6a-4803-83fc-68787cb19dfd 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.372 08:04:49 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.372 08:04:49 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.372 08:04:49 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.372 08:04:49 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.372 08:04:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.372 08:04:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.372 08:04:49 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.372 08:04:49 json_config -- paths/export.sh@5 -- # export PATH 00:04:44.372 08:04:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@51 -- # : 0 00:04:44.372 08:04:49 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.373 08:04:49 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.373 08:04:49 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.373 08:04:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.373 08:04:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.373 08:04:49 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.373 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.373 08:04:49 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.373 08:04:49 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.373 08:04:49 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.373 08:04:49 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:44.373 08:04:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:44.373 08:04:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:44.373 08:04:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:44.373 08:04:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:44.373 WARNING: No tests are enabled so not running JSON configuration tests 00:04:44.373 08:04:49 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:44.373 08:04:49 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:44.373 ************************************ 00:04:44.373 END TEST json_config 00:04:44.373 ************************************ 00:04:44.373 00:04:44.373 real 0m0.193s 00:04:44.373 user 0m0.119s 00:04:44.373 sys 0m0.074s 00:04:44.373 08:04:49 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.373 08:04:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.373 08:04:49 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.373 08:04:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.373 08:04:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.373 08:04:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.373 ************************************ 00:04:44.373 START TEST json_config_extra_key 00:04:44.373 ************************************ 00:04:44.373 08:04:49 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.373 08:04:49 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.373 08:04:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.373 08:04:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.632 08:04:49 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.632 08:04:49 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:44.632 08:04:49 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.632 08:04:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.632 --rc genhtml_branch_coverage=1 00:04:44.632 --rc genhtml_function_coverage=1 00:04:44.632 --rc genhtml_legend=1 00:04:44.632 --rc geninfo_all_blocks=1 00:04:44.632 --rc geninfo_unexecuted_blocks=1 00:04:44.632 00:04:44.632 ' 00:04:44.632 08:04:49 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.632 --rc genhtml_branch_coverage=1 00:04:44.632 --rc genhtml_function_coverage=1 00:04:44.632 --rc genhtml_legend=1 00:04:44.632 --rc geninfo_all_blocks=1 00:04:44.632 --rc geninfo_unexecuted_blocks=1 00:04:44.632 00:04:44.632 ' 00:04:44.632 08:04:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.632 --rc genhtml_branch_coverage=1 00:04:44.632 --rc genhtml_function_coverage=1 00:04:44.632 --rc genhtml_legend=1 00:04:44.632 --rc geninfo_all_blocks=1 00:04:44.632 --rc geninfo_unexecuted_blocks=1 00:04:44.632 00:04:44.632 ' 00:04:44.632 08:04:49 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.632 --rc genhtml_branch_coverage=1 00:04:44.632 --rc 
genhtml_function_coverage=1 00:04:44.632 --rc genhtml_legend=1 00:04:44.632 --rc geninfo_all_blocks=1 00:04:44.632 --rc geninfo_unexecuted_blocks=1 00:04:44.632 00:04:44.632 ' 00:04:44.632 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8f728c72-5b6a-4803-83fc-68787cb19dfd 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8f728c72-5b6a-4803-83fc-68787cb19dfd 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.632 08:04:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.632 08:04:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.632 08:04:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.632 08:04:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.632 08:04:49 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.632 08:04:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:44.632 08:04:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.633 08:04:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:44.633 08:04:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.633 08:04:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.633 08:04:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.633 08:04:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.633 08:04:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.633 08:04:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.633 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.633 08:04:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.633 08:04:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.633 08:04:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.633 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:44.633 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:44.633 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:44.633 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:44.633 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:44.633 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:44.633 INFO: launching applications... 00:04:44.633 Waiting for target to run... 00:04:44.633 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:44.633 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:44.633 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:44.633 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:44.633 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
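A note on the recorded "[: : integer expression expected" message above: it is a shell artifact, not a test failure. nvmf/common.sh line 33 applies the arithmetic test `-eq` to an empty string (the variable being tested is not visible in this trace, so it is a stand-in below), and `[` requires integer operands for `-eq`; the test errors out with status 2, is treated as false, and the script continues. A minimal reproduction, with a hypothetical variable name:

    unset maybe_flag                            # hypothetical stand-in; empty in the log
    [ "$maybe_flag" -eq 1 ] && echo "flag set"
    # bash prints: [: : integer expression expected   (status 2, treated as false)
    echo "script continues regardless"
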
00:04:44.633 08:04:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:44.633 08:04:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:44.633 08:04:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:44.633 08:04:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.633 08:04:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.633 08:04:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.633 08:04:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.633 08:04:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.633 08:04:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58446 00:04:44.633 08:04:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.633 08:04:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58446 /var/tmp/spdk_tgt.sock 00:04:44.633 08:04:49 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:44.633 08:04:49 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58446 ']' 00:04:44.633 08:04:49 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.633 08:04:49 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.633 08:04:49 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.633 08:04:49 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.633 08:04:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:44.633 [2024-11-17 08:04:49.624626] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:44.633 [2024-11-17 08:04:49.624984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58446 ] 00:04:45.201 [2024-11-17 08:04:50.006785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.201 [2024-11-17 08:04:50.122288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.769 08:04:50 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.769 08:04:50 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:45.769 08:04:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:45.769 00:04:45.769 08:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:45.769 INFO: shutting down applications... 
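The `waitforlisten 58446 /var/tmp/spdk_tgt.sock` call above blocks until the freshly launched spdk_tgt both stays alive and exposes its RPC UNIX socket. A rough sketch of that polling idea (not the exact autotest_common.sh implementation; function name, retry count, and sleep interval are assumptions):

    # Wait until $pid is alive and $sock exists as a UNIX socket (sketch).
    wait_for_listen() {
        local pid=$1 sock=$2 retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died while starting
            [[ -S $sock ]] && return 0                # RPC socket is up
            sleep 0.1
        done
        return 1                                      # timed out
    }
    # e.g. wait_for_listen 58446 /var/tmp/spdk_tgt.sock
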
00:04:45.769 08:04:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:45.769 08:04:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:45.769 08:04:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.769 08:04:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58446 ]] 00:04:45.769 08:04:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58446 00:04:45.769 08:04:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.769 08:04:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.769 08:04:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58446 00:04:45.769 08:04:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.338 08:04:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.338 08:04:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.338 08:04:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58446 00:04:46.338 08:04:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.905 08:04:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.905 08:04:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.905 08:04:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58446 00:04:46.905 08:04:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.165 08:04:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.165 08:04:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.165 08:04:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58446 00:04:47.165 08:04:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.733 08:04:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.733 08:04:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.733 08:04:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58446 00:04:47.733 08:04:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.733 08:04:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:47.733 08:04:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.733 SPDK target shutdown done 00:04:47.733 Success 00:04:47.733 08:04:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.733 08:04:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:47.733 00:04:47.733 real 0m3.360s 00:04:47.733 user 0m3.098s 00:04:47.733 sys 0m0.509s 00:04:47.733 ************************************ 00:04:47.733 END TEST json_config_extra_key 00:04:47.733 ************************************ 00:04:47.733 08:04:52 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.733 08:04:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.733 08:04:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.733 08:04:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.733 08:04:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.733 08:04:52 -- common/autotest_common.sh@10 -- # set +x 00:04:47.733 
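The shutdown sequence above sends SIGINT to the target and then probes it with `kill -0` in half-second steps, up to 30 tries, until the process is gone. The same bounded-wait pattern in isolation, mirroring json_config/common.sh:

    pid=58446                                 # pid recorded when the app was launched
    kill -SIGINT "$pid"                       # ask spdk_tgt to shut down cleanly
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only probes; no signal is sent
        sleep 0.5
    done
    if kill -0 "$pid" 2>/dev/null; then
        echo "target still alive after ~15s" >&2
    fi
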
************************************ 00:04:47.733 START TEST alias_rpc 00:04:47.733 ************************************ 00:04:47.733 08:04:52 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.993 * Looking for test storage... 00:04:47.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.993 08:04:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.993 --rc genhtml_branch_coverage=1 00:04:47.993 --rc genhtml_function_coverage=1 00:04:47.993 --rc genhtml_legend=1 00:04:47.993 --rc geninfo_all_blocks=1 00:04:47.993 --rc geninfo_unexecuted_blocks=1 00:04:47.993 00:04:47.993 ' 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.993 --rc genhtml_branch_coverage=1 00:04:47.993 --rc genhtml_function_coverage=1 00:04:47.993 --rc genhtml_legend=1 00:04:47.993 --rc geninfo_all_blocks=1 00:04:47.993 --rc geninfo_unexecuted_blocks=1 00:04:47.993 00:04:47.993 ' 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.993 --rc genhtml_branch_coverage=1 00:04:47.993 --rc genhtml_function_coverage=1 00:04:47.993 --rc genhtml_legend=1 00:04:47.993 --rc geninfo_all_blocks=1 00:04:47.993 --rc geninfo_unexecuted_blocks=1 00:04:47.993 00:04:47.993 ' 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.993 --rc genhtml_branch_coverage=1 00:04:47.993 --rc genhtml_function_coverage=1 00:04:47.993 --rc genhtml_legend=1 00:04:47.993 --rc geninfo_all_blocks=1 00:04:47.993 --rc geninfo_unexecuted_blocks=1 00:04:47.993 00:04:47.993 ' 00:04:47.993 08:04:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:47.993 08:04:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58544 00:04:47.993 08:04:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58544 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58544 ']' 00:04:47.993 08:04:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:47.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.993 08:04:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.252 [2024-11-17 08:04:53.026593] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:48.252 [2024-11-17 08:04:53.026773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58544 ] 00:04:48.252 [2024-11-17 08:04:53.206224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.511 [2024-11-17 08:04:53.286214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.079 08:04:53 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.079 08:04:53 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:49.079 08:04:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:49.338 08:04:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58544 00:04:49.338 08:04:54 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58544 ']' 00:04:49.338 08:04:54 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58544 00:04:49.338 08:04:54 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:49.338 08:04:54 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.338 08:04:54 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58544 00:04:49.338 08:04:54 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.338 08:04:54 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.338 killing process with pid 58544 00:04:49.338 08:04:54 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58544' 00:04:49.338 08:04:54 alias_rpc -- common/autotest_common.sh@973 -- # kill 58544 00:04:49.338 08:04:54 alias_rpc -- common/autotest_common.sh@978 -- # wait 58544 00:04:51.245 ************************************ 00:04:51.245 END TEST alias_rpc 00:04:51.245 ************************************ 00:04:51.245 00:04:51.245 real 0m3.101s 00:04:51.245 user 0m3.294s 00:04:51.245 sys 0m0.457s 00:04:51.245 08:04:55 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.245 08:04:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.245 08:04:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:51.245 08:04:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:51.245 08:04:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.245 08:04:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.245 08:04:55 -- common/autotest_common.sh@10 -- # set +x 00:04:51.245 ************************************ 00:04:51.245 START TEST spdkcli_tcp 00:04:51.245 ************************************ 00:04:51.245 08:04:55 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:51.245 * Looking for test storage... 
00:04:51.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:51.245 08:04:55 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:51.245 08:04:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:51.245 08:04:55 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.245 08:04:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:51.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.245 --rc genhtml_branch_coverage=1 00:04:51.245 --rc genhtml_function_coverage=1 00:04:51.245 --rc genhtml_legend=1 00:04:51.245 --rc geninfo_all_blocks=1 00:04:51.245 --rc geninfo_unexecuted_blocks=1 00:04:51.245 00:04:51.245 ' 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:51.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.245 --rc genhtml_branch_coverage=1 00:04:51.245 --rc genhtml_function_coverage=1 00:04:51.245 --rc genhtml_legend=1 00:04:51.245 --rc geninfo_all_blocks=1 00:04:51.245 --rc geninfo_unexecuted_blocks=1 00:04:51.245 
00:04:51.245 ' 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:51.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.245 --rc genhtml_branch_coverage=1 00:04:51.245 --rc genhtml_function_coverage=1 00:04:51.245 --rc genhtml_legend=1 00:04:51.245 --rc geninfo_all_blocks=1 00:04:51.245 --rc geninfo_unexecuted_blocks=1 00:04:51.245 00:04:51.245 ' 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:51.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.245 --rc genhtml_branch_coverage=1 00:04:51.245 --rc genhtml_function_coverage=1 00:04:51.245 --rc genhtml_legend=1 00:04:51.245 --rc geninfo_all_blocks=1 00:04:51.245 --rc geninfo_unexecuted_blocks=1 00:04:51.245 00:04:51.245 ' 00:04:51.245 08:04:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:51.245 08:04:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:51.245 08:04:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:51.245 08:04:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:51.245 08:04:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:51.245 08:04:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:51.245 08:04:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.245 08:04:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58646 00:04:51.245 08:04:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58646 00:04:51.245 08:04:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58646 ']' 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.245 08:04:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.245 [2024-11-17 08:04:56.188156] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
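The records that follow show how spdkcli_tcp exposes the target's UNIX-domain RPC socket over TCP: a background socat process bridges TCP port 9998 to /var/tmp/spdk.sock, and rpc.py then talks to 127.0.0.1:9998. Reduced to its essentials (flags copied from the trace; `-r` retries and `-t` timeout as rpc.py uses them):

    # Bridge the SPDK RPC UNIX socket to TCP so clients can use an IP/port.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Query over the bridge.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"
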
00:04:51.245 [2024-11-17 08:04:56.188357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58646 ] 00:04:51.505 [2024-11-17 08:04:56.368213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.505 [2024-11-17 08:04:56.447617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.505 [2024-11-17 08:04:56.447636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.444 08:04:57 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.444 08:04:57 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:52.444 08:04:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:52.444 08:04:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58663 00:04:52.444 08:04:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:52.444 [ 00:04:52.444 "bdev_malloc_delete", 00:04:52.444 "bdev_malloc_create", 00:04:52.444 "bdev_null_resize", 00:04:52.444 "bdev_null_delete", 00:04:52.444 "bdev_null_create", 00:04:52.444 "bdev_nvme_cuse_unregister", 00:04:52.444 "bdev_nvme_cuse_register", 00:04:52.444 "bdev_opal_new_user", 00:04:52.444 "bdev_opal_set_lock_state", 00:04:52.444 "bdev_opal_delete", 00:04:52.444 "bdev_opal_get_info", 00:04:52.444 "bdev_opal_create", 00:04:52.444 "bdev_nvme_opal_revert", 00:04:52.444 "bdev_nvme_opal_init", 00:04:52.444 "bdev_nvme_send_cmd", 00:04:52.444 "bdev_nvme_set_keys", 00:04:52.444 "bdev_nvme_get_path_iostat", 00:04:52.444 "bdev_nvme_get_mdns_discovery_info", 00:04:52.444 "bdev_nvme_stop_mdns_discovery", 00:04:52.444 "bdev_nvme_start_mdns_discovery", 00:04:52.444 "bdev_nvme_set_multipath_policy", 00:04:52.444 "bdev_nvme_set_preferred_path", 00:04:52.444 "bdev_nvme_get_io_paths", 00:04:52.444 "bdev_nvme_remove_error_injection", 00:04:52.444 "bdev_nvme_add_error_injection", 00:04:52.444 "bdev_nvme_get_discovery_info", 00:04:52.444 "bdev_nvme_stop_discovery", 00:04:52.444 "bdev_nvme_start_discovery", 00:04:52.444 "bdev_nvme_get_controller_health_info", 00:04:52.444 "bdev_nvme_disable_controller", 00:04:52.444 "bdev_nvme_enable_controller", 00:04:52.444 "bdev_nvme_reset_controller", 00:04:52.444 "bdev_nvme_get_transport_statistics", 00:04:52.444 "bdev_nvme_apply_firmware", 00:04:52.444 "bdev_nvme_detach_controller", 00:04:52.444 "bdev_nvme_get_controllers", 00:04:52.444 "bdev_nvme_attach_controller", 00:04:52.444 "bdev_nvme_set_hotplug", 00:04:52.444 "bdev_nvme_set_options", 00:04:52.444 "bdev_passthru_delete", 00:04:52.444 "bdev_passthru_create", 00:04:52.444 "bdev_lvol_set_parent_bdev", 00:04:52.444 "bdev_lvol_set_parent", 00:04:52.444 "bdev_lvol_check_shallow_copy", 00:04:52.444 "bdev_lvol_start_shallow_copy", 00:04:52.444 "bdev_lvol_grow_lvstore", 00:04:52.444 "bdev_lvol_get_lvols", 00:04:52.444 "bdev_lvol_get_lvstores", 00:04:52.444 "bdev_lvol_delete", 00:04:52.444 "bdev_lvol_set_read_only", 00:04:52.444 "bdev_lvol_resize", 00:04:52.444 "bdev_lvol_decouple_parent", 00:04:52.444 "bdev_lvol_inflate", 00:04:52.444 "bdev_lvol_rename", 00:04:52.444 "bdev_lvol_clone_bdev", 00:04:52.444 "bdev_lvol_clone", 00:04:52.444 "bdev_lvol_snapshot", 00:04:52.444 "bdev_lvol_create", 00:04:52.444 "bdev_lvol_delete_lvstore", 00:04:52.444 "bdev_lvol_rename_lvstore", 00:04:52.444 
"bdev_lvol_create_lvstore", 00:04:52.444 "bdev_raid_set_options", 00:04:52.444 "bdev_raid_remove_base_bdev", 00:04:52.444 "bdev_raid_add_base_bdev", 00:04:52.444 "bdev_raid_delete", 00:04:52.444 "bdev_raid_create", 00:04:52.444 "bdev_raid_get_bdevs", 00:04:52.444 "bdev_error_inject_error", 00:04:52.444 "bdev_error_delete", 00:04:52.444 "bdev_error_create", 00:04:52.444 "bdev_split_delete", 00:04:52.444 "bdev_split_create", 00:04:52.444 "bdev_delay_delete", 00:04:52.444 "bdev_delay_create", 00:04:52.444 "bdev_delay_update_latency", 00:04:52.444 "bdev_zone_block_delete", 00:04:52.444 "bdev_zone_block_create", 00:04:52.444 "blobfs_create", 00:04:52.444 "blobfs_detect", 00:04:52.444 "blobfs_set_cache_size", 00:04:52.444 "bdev_xnvme_delete", 00:04:52.444 "bdev_xnvme_create", 00:04:52.444 "bdev_aio_delete", 00:04:52.444 "bdev_aio_rescan", 00:04:52.444 "bdev_aio_create", 00:04:52.444 "bdev_ftl_set_property", 00:04:52.444 "bdev_ftl_get_properties", 00:04:52.444 "bdev_ftl_get_stats", 00:04:52.444 "bdev_ftl_unmap", 00:04:52.444 "bdev_ftl_unload", 00:04:52.444 "bdev_ftl_delete", 00:04:52.444 "bdev_ftl_load", 00:04:52.444 "bdev_ftl_create", 00:04:52.444 "bdev_virtio_attach_controller", 00:04:52.444 "bdev_virtio_scsi_get_devices", 00:04:52.444 "bdev_virtio_detach_controller", 00:04:52.444 "bdev_virtio_blk_set_hotplug", 00:04:52.444 "bdev_iscsi_delete", 00:04:52.444 "bdev_iscsi_create", 00:04:52.444 "bdev_iscsi_set_options", 00:04:52.444 "accel_error_inject_error", 00:04:52.444 "ioat_scan_accel_module", 00:04:52.444 "dsa_scan_accel_module", 00:04:52.444 "iaa_scan_accel_module", 00:04:52.444 "keyring_file_remove_key", 00:04:52.444 "keyring_file_add_key", 00:04:52.444 "keyring_linux_set_options", 00:04:52.444 "fsdev_aio_delete", 00:04:52.444 "fsdev_aio_create", 00:04:52.444 "iscsi_get_histogram", 00:04:52.444 "iscsi_enable_histogram", 00:04:52.444 "iscsi_set_options", 00:04:52.444 "iscsi_get_auth_groups", 00:04:52.444 "iscsi_auth_group_remove_secret", 00:04:52.444 "iscsi_auth_group_add_secret", 00:04:52.444 "iscsi_delete_auth_group", 00:04:52.444 "iscsi_create_auth_group", 00:04:52.444 "iscsi_set_discovery_auth", 00:04:52.444 "iscsi_get_options", 00:04:52.444 "iscsi_target_node_request_logout", 00:04:52.444 "iscsi_target_node_set_redirect", 00:04:52.444 "iscsi_target_node_set_auth", 00:04:52.444 "iscsi_target_node_add_lun", 00:04:52.444 "iscsi_get_stats", 00:04:52.444 "iscsi_get_connections", 00:04:52.444 "iscsi_portal_group_set_auth", 00:04:52.444 "iscsi_start_portal_group", 00:04:52.444 "iscsi_delete_portal_group", 00:04:52.444 "iscsi_create_portal_group", 00:04:52.444 "iscsi_get_portal_groups", 00:04:52.444 "iscsi_delete_target_node", 00:04:52.444 "iscsi_target_node_remove_pg_ig_maps", 00:04:52.444 "iscsi_target_node_add_pg_ig_maps", 00:04:52.444 "iscsi_create_target_node", 00:04:52.444 "iscsi_get_target_nodes", 00:04:52.444 "iscsi_delete_initiator_group", 00:04:52.444 "iscsi_initiator_group_remove_initiators", 00:04:52.444 "iscsi_initiator_group_add_initiators", 00:04:52.444 "iscsi_create_initiator_group", 00:04:52.444 "iscsi_get_initiator_groups", 00:04:52.444 "nvmf_set_crdt", 00:04:52.444 "nvmf_set_config", 00:04:52.444 "nvmf_set_max_subsystems", 00:04:52.444 "nvmf_stop_mdns_prr", 00:04:52.444 "nvmf_publish_mdns_prr", 00:04:52.444 "nvmf_subsystem_get_listeners", 00:04:52.444 "nvmf_subsystem_get_qpairs", 00:04:52.444 "nvmf_subsystem_get_controllers", 00:04:52.444 "nvmf_get_stats", 00:04:52.444 "nvmf_get_transports", 00:04:52.444 "nvmf_create_transport", 00:04:52.444 "nvmf_get_targets", 00:04:52.444 
"nvmf_delete_target", 00:04:52.444 "nvmf_create_target", 00:04:52.444 "nvmf_subsystem_allow_any_host", 00:04:52.444 "nvmf_subsystem_set_keys", 00:04:52.444 "nvmf_subsystem_remove_host", 00:04:52.444 "nvmf_subsystem_add_host", 00:04:52.444 "nvmf_ns_remove_host", 00:04:52.444 "nvmf_ns_add_host", 00:04:52.444 "nvmf_subsystem_remove_ns", 00:04:52.444 "nvmf_subsystem_set_ns_ana_group", 00:04:52.444 "nvmf_subsystem_add_ns", 00:04:52.444 "nvmf_subsystem_listener_set_ana_state", 00:04:52.444 "nvmf_discovery_get_referrals", 00:04:52.444 "nvmf_discovery_remove_referral", 00:04:52.444 "nvmf_discovery_add_referral", 00:04:52.444 "nvmf_subsystem_remove_listener", 00:04:52.444 "nvmf_subsystem_add_listener", 00:04:52.444 "nvmf_delete_subsystem", 00:04:52.444 "nvmf_create_subsystem", 00:04:52.444 "nvmf_get_subsystems", 00:04:52.444 "env_dpdk_get_mem_stats", 00:04:52.444 "nbd_get_disks", 00:04:52.444 "nbd_stop_disk", 00:04:52.444 "nbd_start_disk", 00:04:52.444 "ublk_recover_disk", 00:04:52.444 "ublk_get_disks", 00:04:52.444 "ublk_stop_disk", 00:04:52.444 "ublk_start_disk", 00:04:52.444 "ublk_destroy_target", 00:04:52.444 "ublk_create_target", 00:04:52.444 "virtio_blk_create_transport", 00:04:52.444 "virtio_blk_get_transports", 00:04:52.444 "vhost_controller_set_coalescing", 00:04:52.444 "vhost_get_controllers", 00:04:52.444 "vhost_delete_controller", 00:04:52.444 "vhost_create_blk_controller", 00:04:52.444 "vhost_scsi_controller_remove_target", 00:04:52.444 "vhost_scsi_controller_add_target", 00:04:52.444 "vhost_start_scsi_controller", 00:04:52.444 "vhost_create_scsi_controller", 00:04:52.444 "thread_set_cpumask", 00:04:52.444 "scheduler_set_options", 00:04:52.444 "framework_get_governor", 00:04:52.444 "framework_get_scheduler", 00:04:52.444 "framework_set_scheduler", 00:04:52.444 "framework_get_reactors", 00:04:52.444 "thread_get_io_channels", 00:04:52.444 "thread_get_pollers", 00:04:52.444 "thread_get_stats", 00:04:52.444 "framework_monitor_context_switch", 00:04:52.444 "spdk_kill_instance", 00:04:52.444 "log_enable_timestamps", 00:04:52.444 "log_get_flags", 00:04:52.444 "log_clear_flag", 00:04:52.444 "log_set_flag", 00:04:52.444 "log_get_level", 00:04:52.445 "log_set_level", 00:04:52.445 "log_get_print_level", 00:04:52.445 "log_set_print_level", 00:04:52.445 "framework_enable_cpumask_locks", 00:04:52.445 "framework_disable_cpumask_locks", 00:04:52.445 "framework_wait_init", 00:04:52.445 "framework_start_init", 00:04:52.445 "scsi_get_devices", 00:04:52.445 "bdev_get_histogram", 00:04:52.445 "bdev_enable_histogram", 00:04:52.445 "bdev_set_qos_limit", 00:04:52.445 "bdev_set_qd_sampling_period", 00:04:52.445 "bdev_get_bdevs", 00:04:52.445 "bdev_reset_iostat", 00:04:52.445 "bdev_get_iostat", 00:04:52.445 "bdev_examine", 00:04:52.445 "bdev_wait_for_examine", 00:04:52.445 "bdev_set_options", 00:04:52.445 "accel_get_stats", 00:04:52.445 "accel_set_options", 00:04:52.445 "accel_set_driver", 00:04:52.445 "accel_crypto_key_destroy", 00:04:52.445 "accel_crypto_keys_get", 00:04:52.445 "accel_crypto_key_create", 00:04:52.445 "accel_assign_opc", 00:04:52.445 "accel_get_module_info", 00:04:52.445 "accel_get_opc_assignments", 00:04:52.445 "vmd_rescan", 00:04:52.445 "vmd_remove_device", 00:04:52.445 "vmd_enable", 00:04:52.445 "sock_get_default_impl", 00:04:52.445 "sock_set_default_impl", 00:04:52.445 "sock_impl_set_options", 00:04:52.445 "sock_impl_get_options", 00:04:52.445 "iobuf_get_stats", 00:04:52.445 "iobuf_set_options", 00:04:52.445 "keyring_get_keys", 00:04:52.445 "framework_get_pci_devices", 00:04:52.445 
"framework_get_config", 00:04:52.445 "framework_get_subsystems", 00:04:52.445 "fsdev_set_opts", 00:04:52.445 "fsdev_get_opts", 00:04:52.445 "trace_get_info", 00:04:52.445 "trace_get_tpoint_group_mask", 00:04:52.445 "trace_disable_tpoint_group", 00:04:52.445 "trace_enable_tpoint_group", 00:04:52.445 "trace_clear_tpoint_mask", 00:04:52.445 "trace_set_tpoint_mask", 00:04:52.445 "notify_get_notifications", 00:04:52.445 "notify_get_types", 00:04:52.445 "spdk_get_version", 00:04:52.445 "rpc_get_methods" 00:04:52.445 ] 00:04:52.445 08:04:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:52.445 08:04:57 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.445 08:04:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.704 08:04:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:52.704 08:04:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58646 00:04:52.704 08:04:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58646 ']' 00:04:52.704 08:04:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58646 00:04:52.704 08:04:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:52.704 08:04:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.704 08:04:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58646 00:04:52.704 08:04:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.704 08:04:57 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.704 killing process with pid 58646 00:04:52.704 08:04:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58646' 00:04:52.704 08:04:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58646 00:04:52.704 08:04:57 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58646 00:04:54.609 ************************************ 00:04:54.609 END TEST spdkcli_tcp 00:04:54.609 ************************************ 00:04:54.609 00:04:54.609 real 0m3.284s 00:04:54.609 user 0m6.030s 00:04:54.609 sys 0m0.507s 00:04:54.609 08:04:59 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.609 08:04:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.609 08:04:59 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:54.609 08:04:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.609 08:04:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.609 08:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:54.609 ************************************ 00:04:54.609 START TEST dpdk_mem_utility 00:04:54.609 ************************************ 00:04:54.609 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:54.609 * Looking for test storage... 
00:04:54.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:54.609 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.609 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.609 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.609 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.609 08:04:59 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:54.610 08:04:59 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.610 08:04:59 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:54.610 08:04:59 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:54.610 08:04:59 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.610 08:04:59 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:54.610 08:04:59 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.610 08:04:59 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.610 08:04:59 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.610 08:04:59 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:54.610 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.610 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.610 --rc genhtml_branch_coverage=1 00:04:54.610 --rc genhtml_function_coverage=1 00:04:54.610 --rc genhtml_legend=1 00:04:54.610 --rc geninfo_all_blocks=1 00:04:54.610 --rc geninfo_unexecuted_blocks=1 00:04:54.610 00:04:54.610 ' 00:04:54.610 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.610 --rc 
genhtml_branch_coverage=1 00:04:54.610 --rc genhtml_function_coverage=1 00:04:54.610 --rc genhtml_legend=1 00:04:54.610 --rc geninfo_all_blocks=1 00:04:54.610 --rc geninfo_unexecuted_blocks=1 00:04:54.610 00:04:54.610 ' 00:04:54.610 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.610 --rc genhtml_branch_coverage=1 00:04:54.610 --rc genhtml_function_coverage=1 00:04:54.610 --rc genhtml_legend=1 00:04:54.610 --rc geninfo_all_blocks=1 00:04:54.610 --rc geninfo_unexecuted_blocks=1 00:04:54.610 00:04:54.610 ' 00:04:54.610 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.610 --rc genhtml_branch_coverage=1 00:04:54.610 --rc genhtml_function_coverage=1 00:04:54.610 --rc genhtml_legend=1 00:04:54.610 --rc geninfo_all_blocks=1 00:04:54.610 --rc geninfo_unexecuted_blocks=1 00:04:54.610 00:04:54.610 ' 00:04:54.610 08:04:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:54.610 08:04:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58757 00:04:54.610 08:04:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.610 08:04:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58757 00:04:54.610 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58757 ']' 00:04:54.610 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.610 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.610 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.610 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.610 08:04:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.610 [2024-11-17 08:04:59.515502] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
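Once this target is up, the trace below shows the test asking it over RPC to dump DPDK memory statistics (the reply names the dump file, /tmp/spdk_mem_dump.txt) and then post-processing that file with scripts/dpdk_mem_info.py, including a second pass with `-m 0` exactly as the test invokes it (the flag's precise meaning is not spelled out in this trace). A condensed sketch of that flow:

    # Dump DPDK memory stats from a running spdk_tgt, then summarize them.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" env_dpdk_get_mem_stats             # target replies {"filename": "/tmp/spdk_mem_dump.txt"}

    mem_info=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    "$mem_info"                               # heap / mempool / memzone summary
    "$mem_info" -m 0                          # second pass, flags as in the trace below
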
00:04:54.610 [2024-11-17 08:04:59.515685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58757 ] 00:04:54.869 [2024-11-17 08:04:59.695178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.869 [2024-11-17 08:04:59.775738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.806 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.806 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:55.806 08:05:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:55.806 08:05:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:55.806 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.806 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.806 { 00:04:55.806 "filename": "/tmp/spdk_mem_dump.txt" 00:04:55.806 } 00:04:55.806 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.806 08:05:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:55.806 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:55.806 1 heaps totaling size 816.000000 MiB 00:04:55.806 size: 816.000000 MiB heap id: 0 00:04:55.806 end heaps---------- 00:04:55.806 9 mempools totaling size 595.772034 MiB 00:04:55.806 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:55.806 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:55.806 size: 92.545471 MiB name: bdev_io_58757 00:04:55.806 size: 50.003479 MiB name: msgpool_58757 00:04:55.806 size: 36.509338 MiB name: fsdev_io_58757 00:04:55.806 size: 21.763794 MiB name: PDU_Pool 00:04:55.806 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:55.806 size: 4.133484 MiB name: evtpool_58757 00:04:55.806 size: 0.026123 MiB name: Session_Pool 00:04:55.806 end mempools------- 00:04:55.806 6 memzones totaling size 4.142822 MiB 00:04:55.806 size: 1.000366 MiB name: RG_ring_0_58757 00:04:55.806 size: 1.000366 MiB name: RG_ring_1_58757 00:04:55.806 size: 1.000366 MiB name: RG_ring_4_58757 00:04:55.806 size: 1.000366 MiB name: RG_ring_5_58757 00:04:55.806 size: 0.125366 MiB name: RG_ring_2_58757 00:04:55.806 size: 0.015991 MiB name: RG_ring_3_58757 00:04:55.806 end memzones------- 00:04:55.806 08:05:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:55.806 heap id: 0 total size: 816.000000 MiB number of busy elements: 313 number of free elements: 18 00:04:55.806 list of free elements. 
size: 16.791870 MiB 00:04:55.806 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:55.806 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:55.806 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:55.806 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:55.806 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:55.806 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:55.806 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:55.806 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:55.806 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:55.806 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:55.806 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:55.806 element at address: 0x20001ac00000 with size: 0.562439 MiB 00:04:55.806 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:55.806 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:55.806 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:55.806 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:55.806 element at address: 0x200028000000 with size: 0.390442 MiB 00:04:55.806 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:55.806 list of standard malloc elements. size: 199.287231 MiB 00:04:55.806 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:55.806 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:55.806 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:55.806 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:55.806 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:55.806 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:55.806 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:55.806 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:55.806 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:55.806 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:55.806 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:55.806 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:55.806 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:55.806 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:55.806 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:55.806 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:55.806 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:55.806 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:55.806 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:55.807 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:55.807 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:55.807 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 
00:04:55.808 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:55.808 element at 
address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:55.808 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:55.808 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806da80 
with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:55.808 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:55.809 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:55.809 list of memzone associated elements. 
size: 599.920898 MiB 00:04:55.809 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:55.809 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:55.809 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:55.809 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:55.809 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:55.809 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58757_0 00:04:55.809 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:55.809 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58757_0 00:04:55.809 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:55.809 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58757_0 00:04:55.809 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:55.809 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:55.809 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:55.809 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:55.809 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:55.809 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58757_0 00:04:55.809 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:55.809 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58757 00:04:55.809 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:55.809 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58757 00:04:55.809 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:55.809 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:55.809 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:55.809 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:55.809 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:55.809 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:55.809 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:55.809 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:55.809 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:55.809 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58757 00:04:55.809 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:55.809 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58757 00:04:55.809 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:55.809 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58757 00:04:55.809 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:55.809 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58757 00:04:55.809 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:55.809 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58757 00:04:55.809 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:55.809 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58757 00:04:55.809 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:55.809 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:55.809 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:55.809 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:55.809 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:55.809 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:55.809 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:55.809 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58757 00:04:55.809 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:55.809 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58757 00:04:55.809 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:55.809 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:55.809 element at address: 0x200028064140 with size: 0.023804 MiB 00:04:55.809 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:55.809 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:55.809 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58757 00:04:55.809 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:04:55.809 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:55.809 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:55.809 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58757 00:04:55.809 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:55.809 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58757 00:04:55.809 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:55.809 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58757 00:04:55.809 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:04:55.809 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:55.809 08:05:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:55.809 08:05:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58757 00:04:55.809 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58757 ']' 00:04:55.809 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58757 00:04:55.809 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:55.809 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.809 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58757 00:04:55.809 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.809 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.809 killing process with pid 58757 00:04:55.809 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58757' 00:04:55.809 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58757 00:04:55.809 08:05:00 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58757 00:04:57.715 00:04:57.715 real 0m3.069s 00:04:57.715 user 0m3.244s 00:04:57.715 sys 0m0.458s 00:04:57.715 08:05:02 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.715 08:05:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.715 ************************************ 00:04:57.715 END TEST dpdk_mem_utility 00:04:57.715 ************************************ 00:04:57.715 08:05:02 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.715 08:05:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.715 08:05:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.715 08:05:02 -- common/autotest_common.sh@10 -- # set +x 
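The dump above is the payload of the dpdk_mem_utility test: SPDK's fixed-size pools for process 58757 (the msgpool, bdev_io/fsdev_io pools, PDU and SCSI task pools, and their RG_ring_* rings) laid out as DPDK malloc elements and memzones. A minimal sketch of requesting the same dump from a running target by hand, assuming this tree's env_dpdk_get_mem_stats RPC and a standard build layout (the binary path, core mask, and sleep are illustrative, not taken from the log):

    # Start a target, ask it to dump its DPDK heap/memzone state, then stop it.
    ./build/bin/spdk_tgt -m 0x1 &              # illustrative binary path and core mask
    tgt=$!
    sleep 2                                    # crude readiness wait; the tests use waitforlisten instead
    ./scripts/rpc.py env_dpdk_get_mem_stats    # prints the file the dump was written to
    kill "$tgt"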
00:04:57.715 ************************************ 00:04:57.715 START TEST event 00:04:57.715 ************************************ 00:04:57.715 08:05:02 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.715 * Looking for test storage... 00:04:57.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:57.715 08:05:02 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.715 08:05:02 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.715 08:05:02 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.715 08:05:02 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.715 08:05:02 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.715 08:05:02 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.715 08:05:02 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.715 08:05:02 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.715 08:05:02 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.715 08:05:02 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.715 08:05:02 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.715 08:05:02 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.715 08:05:02 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.715 08:05:02 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.715 08:05:02 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.715 08:05:02 event -- scripts/common.sh@344 -- # case "$op" in 00:04:57.715 08:05:02 event -- scripts/common.sh@345 -- # : 1 00:04:57.715 08:05:02 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.715 08:05:02 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.715 08:05:02 event -- scripts/common.sh@365 -- # decimal 1 00:04:57.715 08:05:02 event -- scripts/common.sh@353 -- # local d=1 00:04:57.715 08:05:02 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.715 08:05:02 event -- scripts/common.sh@355 -- # echo 1 00:04:57.715 08:05:02 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.715 08:05:02 event -- scripts/common.sh@366 -- # decimal 2 00:04:57.715 08:05:02 event -- scripts/common.sh@353 -- # local d=2 00:04:57.715 08:05:02 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.715 08:05:02 event -- scripts/common.sh@355 -- # echo 2 00:04:57.715 08:05:02 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.715 08:05:02 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.715 08:05:02 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.715 08:05:02 event -- scripts/common.sh@368 -- # return 0 00:04:57.715 08:05:02 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.715 08:05:02 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.715 --rc genhtml_branch_coverage=1 00:04:57.715 --rc genhtml_function_coverage=1 00:04:57.715 --rc genhtml_legend=1 00:04:57.715 --rc geninfo_all_blocks=1 00:04:57.715 --rc geninfo_unexecuted_blocks=1 00:04:57.715 00:04:57.715 ' 00:04:57.715 08:05:02 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.715 --rc genhtml_branch_coverage=1 00:04:57.715 --rc genhtml_function_coverage=1 00:04:57.715 --rc genhtml_legend=1 00:04:57.715 --rc 
geninfo_all_blocks=1 00:04:57.715 --rc geninfo_unexecuted_blocks=1 00:04:57.715 00:04:57.715 ' 00:04:57.715 08:05:02 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.715 --rc genhtml_branch_coverage=1 00:04:57.715 --rc genhtml_function_coverage=1 00:04:57.715 --rc genhtml_legend=1 00:04:57.715 --rc geninfo_all_blocks=1 00:04:57.715 --rc geninfo_unexecuted_blocks=1 00:04:57.715 00:04:57.715 ' 00:04:57.715 08:05:02 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.715 --rc genhtml_branch_coverage=1 00:04:57.715 --rc genhtml_function_coverage=1 00:04:57.715 --rc genhtml_legend=1 00:04:57.715 --rc geninfo_all_blocks=1 00:04:57.715 --rc geninfo_unexecuted_blocks=1 00:04:57.715 00:04:57.715 ' 00:04:57.715 08:05:02 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:57.715 08:05:02 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:57.715 08:05:02 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.715 08:05:02 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:57.715 08:05:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.715 08:05:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.715 ************************************ 00:04:57.715 START TEST event_perf 00:04:57.715 ************************************ 00:04:57.715 08:05:02 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.715 Running I/O for 1 seconds...[2024-11-17 08:05:02.565590] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:57.715 [2024-11-17 08:05:02.565760] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58854 ] 00:04:57.974 [2024-11-17 08:05:02.739947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:57.974 [2024-11-17 08:05:02.822337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.974 [2024-11-17 08:05:02.822502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.974 [2024-11-17 08:05:02.822569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.974 [2024-11-17 08:05:02.822572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.353 Running I/O for 1 seconds... 00:04:59.353 lcore 0: 211825 00:04:59.353 lcore 1: 211824 00:04:59.353 lcore 2: 211825 00:04:59.353 lcore 3: 211826 00:04:59.354 done. 
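The four lcore counters above are event_perf's result: each reactor processed roughly 211k events during the 1-second window requested by -t 1 on the 0xF core mask. A sketch of re-running it by hand with a smaller footprint, using the binary path from the invocation above (the mask and duration here are arbitrary choices, not the test's):

    # Two reactors (cores 0-1), five-second measurement window.
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5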
00:04:59.354 00:04:59.354 real 0m1.488s 00:04:59.354 user 0m4.275s 00:04:59.354 sys 0m0.094s 00:04:59.354 08:05:04 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.354 08:05:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.354 ************************************ 00:04:59.354 END TEST event_perf 00:04:59.354 ************************************ 00:04:59.354 08:05:04 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.354 08:05:04 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:59.354 08:05:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.354 08:05:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.354 ************************************ 00:04:59.354 START TEST event_reactor 00:04:59.354 ************************************ 00:04:59.354 08:05:04 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.354 [2024-11-17 08:05:04.099016] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:59.354 [2024-11-17 08:05:04.099219] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58888 ] 00:04:59.354 [2024-11-17 08:05:04.274844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.354 [2024-11-17 08:05:04.353976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.729 test_start 00:05:00.729 oneshot 00:05:00.729 tick 100 00:05:00.729 tick 100 00:05:00.729 tick 250 00:05:00.729 tick 100 00:05:00.729 tick 100 00:05:00.729 tick 100 00:05:00.729 tick 250 00:05:00.729 tick 500 00:05:00.729 tick 100 00:05:00.729 tick 100 00:05:00.729 tick 250 00:05:00.729 tick 100 00:05:00.729 tick 100 00:05:00.729 test_end 00:05:00.729 00:05:00.729 real 0m1.471s 00:05:00.729 user 0m1.272s 00:05:00.729 sys 0m0.092s 00:05:00.729 08:05:05 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.729 08:05:05 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:00.729 ************************************ 00:05:00.729 END TEST event_reactor 00:05:00.729 ************************************ 00:05:00.729 08:05:05 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.729 08:05:05 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:00.729 08:05:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.729 08:05:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.729 ************************************ 00:05:00.729 START TEST event_reactor_perf 00:05:00.729 ************************************ 00:05:00.729 08:05:05 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.729 [2024-11-17 08:05:05.627771] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
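The test_start/test_end trace a few lines up is the event_reactor run: what appears to be a one-shot event plus periodic pollers whose periods (100, 250, 500) are echoed each time they fire on the single reactor. A quick way to tally the firings from a captured trace, where reactor.log is an illustrative file holding that output rather than anything the harness writes:

    # Count how often each poller period fired during the run.
    grep -o 'tick [0-9]*' reactor.log | sort | uniq -c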
00:05:00.729 [2024-11-17 08:05:05.627937] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58930 ] 00:05:00.988 [2024-11-17 08:05:05.805794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.988 [2024-11-17 08:05:05.886908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.364 test_start 00:05:02.364 test_end 00:05:02.364 Performance: 360616 events per second 00:05:02.364 00:05:02.364 real 0m1.484s 00:05:02.364 user 0m1.288s 00:05:02.364 sys 0m0.089s 00:05:02.364 08:05:07 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.364 08:05:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.364 ************************************ 00:05:02.364 END TEST event_reactor_perf 00:05:02.364 ************************************ 00:05:02.364 08:05:07 event -- event/event.sh@49 -- # uname -s 00:05:02.364 08:05:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:02.364 08:05:07 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.364 08:05:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.364 08:05:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.364 08:05:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.364 ************************************ 00:05:02.364 START TEST event_scheduler 00:05:02.364 ************************************ 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.364 * Looking for test storage... 
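event_reactor_perf reduces to a single figure, 360616 events per second on one core in the run above. A one-liner for pulling that number out of a captured run, where perf.log is an illustrative file holding the output shown:

    # Extract the events-per-second figure from a saved reactor_perf log.
    awk '/Performance:/ {print $2}' perf.log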
00:05:02.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.364 08:05:07 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.364 --rc genhtml_branch_coverage=1 00:05:02.364 --rc genhtml_function_coverage=1 00:05:02.364 --rc genhtml_legend=1 00:05:02.364 --rc geninfo_all_blocks=1 00:05:02.364 --rc geninfo_unexecuted_blocks=1 00:05:02.364 00:05:02.364 ' 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.364 --rc genhtml_branch_coverage=1 00:05:02.364 --rc genhtml_function_coverage=1 00:05:02.364 --rc genhtml_legend=1 00:05:02.364 --rc geninfo_all_blocks=1 00:05:02.364 --rc geninfo_unexecuted_blocks=1 00:05:02.364 00:05:02.364 ' 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.364 --rc genhtml_branch_coverage=1 00:05:02.364 --rc genhtml_function_coverage=1 00:05:02.364 --rc genhtml_legend=1 00:05:02.364 --rc geninfo_all_blocks=1 00:05:02.364 --rc geninfo_unexecuted_blocks=1 00:05:02.364 00:05:02.364 ' 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.364 --rc genhtml_branch_coverage=1 00:05:02.364 --rc genhtml_function_coverage=1 00:05:02.364 --rc genhtml_legend=1 00:05:02.364 --rc geninfo_all_blocks=1 00:05:02.364 --rc geninfo_unexecuted_blocks=1 00:05:02.364 00:05:02.364 ' 00:05:02.364 08:05:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:02.364 08:05:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59006 00:05:02.364 08:05:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:02.364 08:05:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.364 08:05:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59006 00:05:02.364 08:05:07 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59006 ']' 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.364 08:05:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.624 [2024-11-17 08:05:07.416794] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:02.624 [2024-11-17 08:05:07.416985] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59006 ] 00:05:02.624 [2024-11-17 08:05:07.595629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.886 [2024-11-17 08:05:07.683363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.886 [2024-11-17 08:05:07.683523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.886 [2024-11-17 08:05:07.683583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.886 [2024-11-17 08:05:07.683793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.454 08:05:08 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.454 08:05:08 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:03.454 08:05:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:03.454 08:05:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.454 08:05:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.454 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.454 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.454 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.454 POWER: Cannot set governor of lcore 0 to performance 00:05:03.454 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.454 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.454 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.454 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.454 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:03.454 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:03.454 POWER: Unable to set Power Management Environment for lcore 0 00:05:03.454 [2024-11-17 08:05:08.406546] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:03.454 [2024-11-17 08:05:08.406637] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:03.454 [2024-11-17 08:05:08.406731] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:03.454 [2024-11-17 08:05:08.406833] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:03.454 [2024-11-17 08:05:08.406900] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:03.454 [2024-11-17 08:05:08.406972] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:03.454 08:05:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.454 08:05:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:03.454 08:05:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.454 08:05:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.715 [2024-11-17 08:05:08.629781] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:03.715 08:05:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.715 08:05:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:03.715 08:05:08 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.715 08:05:08 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.715 08:05:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.715 ************************************ 00:05:03.715 START TEST scheduler_create_thread 00:05:03.715 ************************************ 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.715 2 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.715 3 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.715 4 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.715 5 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.715 6 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.715 7 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.715 8 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.715 9 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.715 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.975 10 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.975 08:05:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.351 08:05:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.351 08:05:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:05.351 08:05:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:05.351 08:05:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.351 08:05:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.287 ************************************ 00:05:06.287 END TEST scheduler_create_thread 00:05:06.287 ************************************ 00:05:06.287 08:05:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.287 00:05:06.287 real 0m2.617s 00:05:06.287 user 0m0.020s 00:05:06.287 sys 0m0.006s 00:05:06.287 08:05:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.287 08:05:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.547 08:05:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:06.547 08:05:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59006 00:05:06.547 08:05:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59006 ']' 00:05:06.547 08:05:11 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59006 00:05:06.547 08:05:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:06.547 08:05:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.547 08:05:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59006 00:05:06.547 killing process with pid 59006 00:05:06.547 08:05:11 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:06.547 08:05:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:06.547 08:05:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59006' 00:05:06.547 08:05:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59006 00:05:06.547 08:05:11 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59006 00:05:06.806 [2024-11-17 08:05:11.741744] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:07.744 00:05:07.744 real 0m5.427s 00:05:07.744 user 0m9.922s 00:05:07.744 sys 0m0.410s 00:05:07.744 08:05:12 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.744 ************************************ 00:05:07.744 END TEST event_scheduler 00:05:07.744 ************************************ 00:05:07.744 08:05:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.744 08:05:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:07.744 08:05:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:07.744 08:05:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.744 08:05:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.744 08:05:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.744 ************************************ 00:05:07.745 START TEST app_repeat 00:05:07.745 ************************************ 00:05:07.745 08:05:12 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59112 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:07.745 Process app_repeat pid: 59112 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59112' 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:07.745 spdk_app_start Round 0 00:05:07.745 08:05:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59112 /var/tmp/spdk-nbd.sock 00:05:07.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.745 08:05:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59112 ']' 00:05:07.745 08:05:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.745 08:05:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.745 08:05:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.745 08:05:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.745 08:05:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.745 [2024-11-17 08:05:12.667758] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
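Everything the event_scheduler suite above did went through RPC: the target is started with --wait-for-rpc, the dynamic scheduler is selected before framework init (the POWER/governor errors are the DPDK governor failing to find cpufreq sysfs entries, as expected in a VM, after which the dynamic scheduler proceeds without it), and the scheduler_plugin commands then create, throttle, and delete test threads. A condensed sketch of that sequence, mirroring the rpc_cmd calls in the log; it assumes scheduler_plugin is importable, e.g. via PYTHONPATH pointing at test/event/scheduler as the harness arranges, and the thread id is whatever the create call returns (11 in the run above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_set_scheduler dynamic                 # must precede framework_start_init
    $rpc framework_start_init
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"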
00:05:07.745 [2024-11-17 08:05:12.667920] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59112 ] 00:05:08.004 [2024-11-17 08:05:12.847272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.004 [2024-11-17 08:05:12.930910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.004 [2024-11-17 08:05:12.930925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.941 08:05:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.941 08:05:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:08.941 08:05:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.200 Malloc0 00:05:09.200 08:05:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.459 Malloc1 00:05:09.459 08:05:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.459 08:05:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.718 /dev/nbd0 00:05:09.718 08:05:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.718 08:05:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:09.718 08:05:14 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.718 1+0 records in 00:05:09.718 1+0 records out 00:05:09.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289833 s, 14.1 MB/s 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.718 08:05:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.718 08:05:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.718 08:05:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.718 08:05:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.977 /dev/nbd1 00:05:09.977 08:05:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.977 08:05:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.977 08:05:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:09.977 08:05:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:09.977 08:05:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:09.977 08:05:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:09.977 08:05:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:09.977 08:05:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:09.977 08:05:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:09.977 08:05:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:09.978 08:05:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.978 1+0 records in 00:05:09.978 1+0 records out 00:05:09.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312715 s, 13.1 MB/s 00:05:09.978 08:05:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.978 08:05:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:09.978 08:05:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.978 08:05:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:09.978 08:05:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:09.978 08:05:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.978 08:05:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.978 08:05:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.978 08:05:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
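The waitfornbd step for each device boils down to: poll /proc/partitions until the nbd name appears, then prove the block device is actually usable by pulling one 4 KiB block through it with O_DIRECT, which is what produces the "1+0 records" dd output above. Roughly, as a sketch under the same assumptions (the scratch path is illustrative; the log uses test/event/nbdtest):

    waitfornbd_sketch() {
        local nbd_name=$1 i found=0 tmp=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && { found=1; break; }
            sleep 0.1
        done
        ((found)) || return 1
        # a direct read (bypassing the page cache) confirms the device answers I/O
        dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct || return 1
        [ "$(stat -c %s $tmp)" != 0 ] || return 1
        rm -f $tmp
    }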
00:05:09.978 08:05:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.237 { 00:05:10.237 "nbd_device": "/dev/nbd0", 00:05:10.237 "bdev_name": "Malloc0" 00:05:10.237 }, 00:05:10.237 { 00:05:10.237 "nbd_device": "/dev/nbd1", 00:05:10.237 "bdev_name": "Malloc1" 00:05:10.237 } 00:05:10.237 ]' 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.237 { 00:05:10.237 "nbd_device": "/dev/nbd0", 00:05:10.237 "bdev_name": "Malloc0" 00:05:10.237 }, 00:05:10.237 { 00:05:10.237 "nbd_device": "/dev/nbd1", 00:05:10.237 "bdev_name": "Malloc1" 00:05:10.237 } 00:05:10.237 ]' 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.237 /dev/nbd1' 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.237 /dev/nbd1' 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.237 256+0 records in 00:05:10.237 256+0 records out 00:05:10.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481801 s, 218 MB/s 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.237 256+0 records in 00:05:10.237 256+0 records out 00:05:10.237 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265354 s, 39.5 MB/s 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.237 08:05:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.496 256+0 records in 00:05:10.496 256+0 records out 00:05:10.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278509 s, 37.6 MB/s 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.496 08:05:15 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.496 08:05:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.755 08:05:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.755 08:05:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.755 08:05:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.755 08:05:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.755 08:05:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.755 08:05:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.755 08:05:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.755 08:05:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.755 08:05:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.755 08:05:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.014 08:05:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.014 08:05:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.014 08:05:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.014 08:05:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.014 08:05:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.014 08:05:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.014 08:05:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.014 08:05:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.014 08:05:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.014 08:05:15 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.014 08:05:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.274 08:05:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.274 08:05:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.274 08:05:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.274 08:05:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.274 08:05:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.274 08:05:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.274 08:05:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.274 08:05:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.274 08:05:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.274 08:05:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.274 08:05:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.274 08:05:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.274 08:05:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.842 08:05:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.447 [2024-11-17 08:05:17.353014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.447 [2024-11-17 08:05:17.431250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.447 [2024-11-17 08:05:17.431257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.724 [2024-11-17 08:05:17.571795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.724 [2024-11-17 08:05:17.571927] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.651 spdk_app_start Round 1 00:05:14.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.651 08:05:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.651 08:05:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:14.651 08:05:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59112 /var/tmp/spdk-nbd.sock 00:05:14.651 08:05:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59112 ']' 00:05:14.651 08:05:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.651 08:05:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.651 08:05:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
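Each round's data check is symmetric: nbd_dd_data_verify first writes 256 4 KiB blocks (1 MiB) of /dev/urandom through every nbd device with oflag=direct, then re-reads each device with cmp -b -n 1M against the same source file, so any corruption on the Malloc bdev path shows up as a byte mismatch. A self-contained sketch of that write/verify pair (paths shortened; the log uses test/event/nbdrandtest):

    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256            # one shared random pattern
    for dev in "${nbd_list[@]}"; do
        dd if=$tmp_file of=$dev bs=4096 count=256 oflag=direct   # write phase, no page cache
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M $tmp_file $dev                              # verify phase: byte-for-byte over the first MiB
    done
    rm $tmp_file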
00:05:14.651 08:05:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.651 08:05:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.910 08:05:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.910 08:05:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.910 08:05:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.169 Malloc0 00:05:15.169 08:05:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.429 Malloc1 00:05:15.429 08:05:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.429 08:05:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.688 /dev/nbd0 00:05:15.947 08:05:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.947 08:05:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.947 08:05:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.948 1+0 records in 00:05:15.948 1+0 records out 
00:05:15.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312675 s, 13.1 MB/s 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.948 08:05:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.948 08:05:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.948 08:05:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.948 08:05:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.207 /dev/nbd1 00:05:16.207 08:05:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.207 08:05:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.207 1+0 records in 00:05:16.207 1+0 records out 00:05:16.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346624 s, 11.8 MB/s 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:16.207 08:05:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:16.207 08:05:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.207 08:05:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.207 08:05:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.207 08:05:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.207 08:05:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.467 { 00:05:16.467 "nbd_device": "/dev/nbd0", 00:05:16.467 "bdev_name": "Malloc0" 00:05:16.467 }, 00:05:16.467 { 00:05:16.467 "nbd_device": "/dev/nbd1", 00:05:16.467 "bdev_name": "Malloc1" 00:05:16.467 } 
00:05:16.467 ]' 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.467 { 00:05:16.467 "nbd_device": "/dev/nbd0", 00:05:16.467 "bdev_name": "Malloc0" 00:05:16.467 }, 00:05:16.467 { 00:05:16.467 "nbd_device": "/dev/nbd1", 00:05:16.467 "bdev_name": "Malloc1" 00:05:16.467 } 00:05:16.467 ]' 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.467 /dev/nbd1' 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.467 /dev/nbd1' 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.467 256+0 records in 00:05:16.467 256+0 records out 00:05:16.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00750193 s, 140 MB/s 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.467 256+0 records in 00:05:16.467 256+0 records out 00:05:16.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241145 s, 43.5 MB/s 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.467 256+0 records in 00:05:16.467 256+0 records out 00:05:16.467 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277733 s, 37.8 MB/s 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.467 08:05:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.467 08:05:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.726 08:05:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.726 08:05:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.726 08:05:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.726 08:05:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.726 08:05:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.726 08:05:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.726 08:05:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.726 08:05:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.726 08:05:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.726 08:05:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.986 08:05:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.986 08:05:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.986 08:05:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.986 08:05:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.986 08:05:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.986 08:05:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.986 08:05:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.986 08:05:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.986 08:05:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.986 08:05:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.986 08:05:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.245 08:05:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.245 08:05:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.245 08:05:22 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:17.504 08:05:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.504 08:05:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.504 08:05:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.504 08:05:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:17.504 08:05:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.504 08:05:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.504 08:05:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.504 08:05:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.504 08:05:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.504 08:05:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.763 08:05:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.700 [2024-11-17 08:05:23.430557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.700 [2024-11-17 08:05:23.505354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.700 [2024-11-17 08:05:23.505360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.700 [2024-11-17 08:05:23.644120] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.700 [2024-11-17 08:05:23.644213] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.233 spdk_app_start Round 2 00:05:21.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.233 08:05:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.233 08:05:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:21.233 08:05:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59112 /var/tmp/spdk-nbd.sock 00:05:21.233 08:05:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59112 ']' 00:05:21.233 08:05:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.233 08:05:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.233 08:05:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
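What drives the Round 0/1/2 repetition is a small outer loop in event.sh: each iteration announces the round, runs the malloc/nbd verification, then asks the app to shut itself down over RPC and sleeps long enough for the reactors to exit before spdk_app_start comes back up. A sketch of that loop (variable names are illustrative; the RPC and sleep match the log):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # ... bdev_malloc_create + nbd write/verify happen here ...
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            spdk_kill_instance SIGTERM    # graceful stop, app restarts for the next round
        sleep 3
    done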
00:05:21.233 08:05:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.233 08:05:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.233 08:05:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.233 08:05:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:21.233 08:05:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.233 Malloc0 00:05:21.492 08:05:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.751 Malloc1 00:05:21.751 08:05:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.751 08:05:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.011 /dev/nbd0 00:05:22.011 08:05:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.011 08:05:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.011 1+0 records in 00:05:22.011 1+0 records out 
00:05:22.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029686 s, 13.8 MB/s 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.011 08:05:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.011 08:05:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.011 08:05:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.011 08:05:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.270 /dev/nbd1 00:05:22.270 08:05:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.270 08:05:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.270 08:05:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:22.270 08:05:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.270 08:05:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.270 08:05:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.271 08:05:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:22.271 08:05:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.271 08:05:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.271 08:05:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.271 08:05:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.271 1+0 records in 00:05:22.271 1+0 records out 00:05:22.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321033 s, 12.8 MB/s 00:05:22.271 08:05:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.271 08:05:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.271 08:05:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.271 08:05:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.271 08:05:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.271 08:05:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.271 08:05:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.271 08:05:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.271 08:05:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.271 08:05:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.530 { 00:05:22.530 "nbd_device": "/dev/nbd0", 00:05:22.530 "bdev_name": "Malloc0" 00:05:22.530 }, 00:05:22.530 { 00:05:22.530 "nbd_device": "/dev/nbd1", 00:05:22.530 "bdev_name": "Malloc1" 00:05:22.530 } 
00:05:22.530 ]' 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.530 { 00:05:22.530 "nbd_device": "/dev/nbd0", 00:05:22.530 "bdev_name": "Malloc0" 00:05:22.530 }, 00:05:22.530 { 00:05:22.530 "nbd_device": "/dev/nbd1", 00:05:22.530 "bdev_name": "Malloc1" 00:05:22.530 } 00:05:22.530 ]' 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.530 /dev/nbd1' 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.530 /dev/nbd1' 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.530 256+0 records in 00:05:22.530 256+0 records out 00:05:22.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105399 s, 99.5 MB/s 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.530 256+0 records in 00:05:22.530 256+0 records out 00:05:22.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263282 s, 39.8 MB/s 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.530 256+0 records in 00:05:22.530 256+0 records out 00:05:22.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262772 s, 39.9 MB/s 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.530 08:05:27 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.530 08:05:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.531 08:05:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.531 08:05:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.531 08:05:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.789 08:05:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.789 08:05:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.789 08:05:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.789 08:05:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.789 08:05:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.789 08:05:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.789 08:05:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.789 08:05:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.789 08:05:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.790 08:05:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.049 08:05:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.049 08:05:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.049 08:05:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.049 08:05:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.050 08:05:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.050 08:05:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.050 08:05:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.050 08:05:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.050 08:05:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.050 08:05:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.050 08:05:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.309 08:05:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.309 08:05:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.309 08:05:28 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.568 08:05:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.568 08:05:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.568 08:05:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.568 08:05:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.568 08:05:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.568 08:05:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.568 08:05:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.568 08:05:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.568 08:05:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.568 08:05:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.828 08:05:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.768 [2024-11-17 08:05:29.493761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.768 [2024-11-17 08:05:29.569003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.768 [2024-11-17 08:05:29.569014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.768 [2024-11-17 08:05:29.700818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.768 [2024-11-17 08:05:29.700904] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.302 08:05:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59112 /var/tmp/spdk-nbd.sock 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59112 ']' 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
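All of the device plumbing above goes through three RPCs on the app's private socket: nbd_start_disk exports a bdev as /dev/nbdX, nbd_get_disks reports the current bdev-to-device mapping as JSON (which the test filters with jq and counts with grep -c), and nbd_stop_disk tears the export down again — hence the empty '[]' once a round finishes. For example, against the same socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
    $rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device'   # -> /dev/nbd0 and /dev/nbd1
    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock nbd_stop_disk /dev/nbd1
    $rpc -s $sock nbd_get_disks                               # -> [] once both are stopped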
00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:27.302 08:05:31 event.app_repeat -- event/event.sh@39 -- # killprocess 59112 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59112 ']' 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59112 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59112 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.302 killing process with pid 59112 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59112' 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59112 00:05:27.302 08:05:31 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59112 00:05:27.870 spdk_app_start is called in Round 0. 00:05:27.870 Shutdown signal received, stop current app iteration 00:05:27.870 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:27.870 spdk_app_start is called in Round 1. 00:05:27.870 Shutdown signal received, stop current app iteration 00:05:27.870 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:27.870 spdk_app_start is called in Round 2. 00:05:27.870 Shutdown signal received, stop current app iteration 00:05:27.870 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:27.870 spdk_app_start is called in Round 3. 00:05:27.870 Shutdown signal received, stop current app iteration 00:05:27.870 08:05:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.870 08:05:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:27.870 00:05:27.870 real 0m20.104s 00:05:27.870 user 0m44.894s 00:05:27.870 sys 0m2.468s 00:05:27.870 08:05:32 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.870 08:05:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.870 ************************************ 00:05:27.870 END TEST app_repeat 00:05:27.870 ************************************ 00:05:27.870 08:05:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.870 08:05:32 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.870 08:05:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.870 08:05:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.870 08:05:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.870 ************************************ 00:05:27.870 START TEST cpu_locks 00:05:27.870 ************************************ 00:05:27.870 08:05:32 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.870 * Looking for test storage... 
00:05:27.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:27.870 08:05:32 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:27.870 08:05:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:27.870 08:05:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.130 08:05:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.130 08:05:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:28.130 08:05:32 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.130 08:05:32 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.130 --rc genhtml_branch_coverage=1 00:05:28.130 --rc genhtml_function_coverage=1 00:05:28.130 --rc genhtml_legend=1 00:05:28.130 --rc geninfo_all_blocks=1 00:05:28.130 --rc geninfo_unexecuted_blocks=1 00:05:28.130 00:05:28.130 ' 00:05:28.130 08:05:32 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.130 --rc genhtml_branch_coverage=1 00:05:28.130 --rc genhtml_function_coverage=1 
00:05:28.130 --rc genhtml_legend=1 00:05:28.130 --rc geninfo_all_blocks=1 00:05:28.130 --rc geninfo_unexecuted_blocks=1 00:05:28.130 00:05:28.130 ' 00:05:28.130 08:05:32 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.130 --rc genhtml_branch_coverage=1 00:05:28.130 --rc genhtml_function_coverage=1 00:05:28.130 --rc genhtml_legend=1 00:05:28.130 --rc geninfo_all_blocks=1 00:05:28.130 --rc geninfo_unexecuted_blocks=1 00:05:28.130 00:05:28.130 ' 00:05:28.130 08:05:32 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.130 --rc genhtml_branch_coverage=1 00:05:28.130 --rc genhtml_function_coverage=1 00:05:28.130 --rc genhtml_legend=1 00:05:28.130 --rc geninfo_all_blocks=1 00:05:28.130 --rc geninfo_unexecuted_blocks=1 00:05:28.130 00:05:28.130 ' 00:05:28.130 08:05:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:28.130 08:05:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:28.130 08:05:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:28.130 08:05:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:28.130 08:05:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.130 08:05:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.130 08:05:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.130 ************************************ 00:05:28.130 START TEST default_locks 00:05:28.130 ************************************ 00:05:28.130 08:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:28.130 08:05:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59565 00:05:28.130 08:05:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.130 08:05:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59565 00:05:28.130 08:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59565 ']' 00:05:28.130 08:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.130 08:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.130 08:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.130 08:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.130 08:05:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.130 [2024-11-17 08:05:33.049119] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
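
The xtrace above captures scripts/common.sh deciding whether the installed lcov predates 2.x: lt 1.15 2 hands off to cmp_versions, which splits both version strings on '.', '-' and ':' (the IFS=.-: lines) and compares the fields numerically, left to right, padding the shorter version with zeros. A minimal bash sketch of the same comparison; the helper name here is hypothetical, the real code lives in scripts/common.sh:

    cmp_versions_lt() {
        # Split the two version strings the way the trace does (IFS=.-:).
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            # Missing fields count as 0, so "2" compares as "2.0".
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    cmp_versions_lt 1.15 2 && echo "old lcov, add the --rc coverage flags"

Since lcov 1.15 is indeed older than 2, the harness takes the branch that exports the LCOV_OPTS/LCOV flag bundle repeated below.
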
00:05:28.130 [2024-11-17 08:05:33.049265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59565 ] 00:05:28.389 [2024-11-17 08:05:33.214202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.389 [2024-11-17 08:05:33.302375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59565 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59565 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59565 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59565 ']' 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59565 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59565 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59565' 00:05:29.327 killing process with pid 59565 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59565 00:05:29.327 08:05:34 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59565 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59565 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59565 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59565 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59565 ']' 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.234 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59565) - No such process 00:05:31.234 ERROR: process (pid: 59565) is no longer running 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.234 00:05:31.234 real 0m3.001s 00:05:31.234 user 0m3.163s 00:05:31.234 sys 0m0.494s 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.234 08:05:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 ************************************ 00:05:31.234 END TEST default_locks 00:05:31.234 ************************************ 00:05:31.234 08:05:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:31.234 08:05:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.234 08:05:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.234 08:05:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 ************************************ 00:05:31.234 START TEST default_locks_via_rpc 00:05:31.234 ************************************ 00:05:31.234 08:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:31.234 08:05:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59629 00:05:31.234 08:05:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59629 00:05:31.234 08:05:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.234 08:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59629 ']' 00:05:31.234 08:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.234 08:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.234 08:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.234 08:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.234 08:05:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.234 [2024-11-17 08:05:36.126778] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:31.234 [2024-11-17 08:05:36.126956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59629 ] 00:05:31.493 [2024-11-17 08:05:36.306417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.493 [2024-11-17 08:05:36.388986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.062 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.321 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.321 08:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59629 00:05:32.321 08:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59629 00:05:32.321 08:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.321 08:05:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59629 00:05:32.321 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59629 ']' 00:05:32.321 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59629 00:05:32.321 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:32.581 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.581 08:05:37 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59629 00:05:32.581 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.581 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.581 killing process with pid 59629 00:05:32.581 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59629' 00:05:32.581 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59629 00:05:32.581 08:05:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59629 00:05:34.487 00:05:34.487 real 0m2.968s 00:05:34.487 user 0m3.120s 00:05:34.487 sys 0m0.488s 00:05:34.487 08:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.487 08:05:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.487 ************************************ 00:05:34.487 END TEST default_locks_via_rpc 00:05:34.487 ************************************ 00:05:34.487 08:05:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:34.487 08:05:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.487 08:05:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.487 08:05:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.487 ************************************ 00:05:34.487 START TEST non_locking_app_on_locked_coremask 00:05:34.487 ************************************ 00:05:34.487 08:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:34.487 08:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59692 00:05:34.487 08:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.487 08:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59692 /var/tmp/spdk.sock 00:05:34.487 08:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59692 ']' 00:05:34.487 08:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.487 08:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.487 08:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.487 08:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.487 08:05:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.487 [2024-11-17 08:05:39.155460] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
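
The default_locks_via_rpc test that finishes above flips the same core-lock state at runtime over JSON-RPC rather than with process flags: framework_disable_cpumask_locks releases the per-core lock files (the no_locks check then sees none), and framework_enable_cpumask_locks re-claims them. Both method names are taken verbatim from the trace; roughly, against a live target they could be driven with scripts/rpc.py from the SPDK repo:

    # Release the per-core lock files held by the target on /var/tmp/spdk.sock,
    # then re-claim them (sketch; assumes the target from the test is running).
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
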
00:05:34.487 [2024-11-17 08:05:39.155640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59692 ] 00:05:34.487 [2024-11-17 08:05:39.334057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.487 [2024-11-17 08:05:39.411407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.424 08:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.424 08:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.424 08:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59708 00:05:35.424 08:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59708 /var/tmp/spdk2.sock 00:05:35.424 08:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:35.424 08:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59708 ']' 00:05:35.424 08:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.424 08:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.424 08:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.424 08:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.424 08:05:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.424 [2024-11-17 08:05:40.187597] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:35.424 [2024-11-17 08:05:40.187776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59708 ] 00:05:35.424 [2024-11-17 08:05:40.361948] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
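
"CPU core locks deactivated" above is the crux of non_locking_app_on_locked_coremask: a second target can share core 0 with the first only because it opts out of claiming the lock file. The pair the test launches, both command lines as they appear in the trace (backgrounding added for the sketch):

    # First target claims core 0 (default behaviour: lock file created).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # Second target reuses core 0 but skips the claim, on its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
        --disable-cpumask-locks -r /var/tmp/spdk2.sock &
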
00:05:35.424 [2024-11-17 08:05:40.361993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.684 [2024-11-17 08:05:40.524234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.063 08:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.063 08:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:37.063 08:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59692 00:05:37.063 08:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59692 00:05:37.063 08:05:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.633 08:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59692 00:05:37.633 08:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59692 ']' 00:05:37.633 08:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59692 00:05:37.633 08:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:37.633 08:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.633 08:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59692 00:05:37.633 08:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.633 08:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.633 killing process with pid 59692 00:05:37.633 08:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59692' 00:05:37.633 08:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59692 00:05:37.633 08:05:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59692 00:05:40.925 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59708 00:05:40.925 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59708 ']' 00:05:40.925 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59708 00:05:40.925 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.925 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.925 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59708 00:05:40.925 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.925 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.925 killing process with pid 59708 00:05:40.925 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59708' 00:05:40.925 08:05:45 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59708 00:05:40.925 08:05:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59708 00:05:42.831 00:05:42.831 real 0m8.384s 00:05:42.831 user 0m8.869s 00:05:42.831 sys 0m1.116s 00:05:42.831 08:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.831 08:05:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.831 ************************************ 00:05:42.831 END TEST non_locking_app_on_locked_coremask 00:05:42.831 ************************************ 00:05:42.831 08:05:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:42.831 08:05:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.831 08:05:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.831 08:05:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.831 ************************************ 00:05:42.831 START TEST locking_app_on_unlocked_coremask 00:05:42.831 ************************************ 00:05:42.831 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:42.831 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59822 00:05:42.831 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59822 /var/tmp/spdk.sock 00:05:42.831 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59822 ']' 00:05:42.831 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.831 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:42.831 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.831 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.831 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.831 08:05:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.831 [2024-11-17 08:05:47.583609] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:42.831 [2024-11-17 08:05:47.583818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59822 ] 00:05:42.831 [2024-11-17 08:05:47.759948] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
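
Every startup in these traces is gated by waitforlisten, which polls until the new target's RPC socket accepts connections or the process dies, with max_retries=100 as set above. A rough, reduced stand-in for the real helper in autotest_common.sh; the function name is made up, and it assumes a netcat build with Unix-socket support (-U), which the real helper does not depend on:

    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do            # mirrors max_retries=100
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            nc -zU "$sock" 2>/dev/null && return 0   # socket is accepting connections
            sleep 0.1
        done
        return 1
    }
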
00:05:42.831 [2024-11-17 08:05:47.759990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.831 [2024-11-17 08:05:47.838329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.769 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.769 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.769 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59838 00:05:43.769 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.769 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59838 /var/tmp/spdk2.sock 00:05:43.769 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59838 ']' 00:05:43.769 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.769 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.769 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.769 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.769 08:05:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.769 [2024-11-17 08:05:48.603173] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:43.769 [2024-11-17 08:05:48.603366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59838 ] 00:05:44.028 [2024-11-17 08:05:48.793205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.028 [2024-11-17 08:05:48.950390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.415 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.415 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:45.415 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59838 00:05:45.415 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.415 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59838 00:05:46.016 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59822 00:05:46.016 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59822 ']' 00:05:46.016 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59822 00:05:46.016 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:46.016 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.016 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59822 00:05:46.016 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.016 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.016 killing process with pid 59822 00:05:46.016 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59822' 00:05:46.016 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59822 00:05:46.016 08:05:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59822 00:05:49.313 08:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59838 00:05:49.313 08:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59838 ']' 00:05:49.313 08:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59838 00:05:49.313 08:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:49.313 08:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.313 08:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59838 00:05:49.313 08:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.313 08:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.313 killing process with pid 59838 00:05:49.313 08:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59838' 00:05:49.313 08:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59838 00:05:49.313 08:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59838 00:05:51.218 00:05:51.218 real 0m8.364s 00:05:51.218 user 0m8.888s 00:05:51.218 sys 0m1.177s 00:05:51.218 08:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.218 08:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.218 ************************************ 00:05:51.218 END TEST locking_app_on_unlocked_coremask 00:05:51.218 ************************************ 00:05:51.218 08:05:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:51.218 08:05:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.218 08:05:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.218 08:05:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.218 ************************************ 00:05:51.218 START TEST locking_app_on_locked_coremask 00:05:51.218 ************************************ 00:05:51.218 08:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:51.219 08:05:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59951 00:05:51.219 08:05:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59951 /var/tmp/spdk.sock 00:05:51.219 08:05:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.219 08:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59951 ']' 00:05:51.219 08:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.219 08:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.219 08:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.219 08:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.219 08:05:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.219 [2024-11-17 08:05:56.003127] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
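
locks_exist, used after each successful claim in these traces, is just a presence test on the per-core lock files: list the POSIX locks held by the PID and grep for the spdk_cpu_lock name (the files themselves, per the check_remaining_locks trace further down, are /var/tmp/spdk_cpu_lock_000 and so on). Reconstructed from the lslocks/grep pairs shown above:

    locks_exist() {
        # util-linux lslocks lists the file locks held by a PID.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 59838 && echo "pid 59838 holds its core locks"
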
00:05:51.219 [2024-11-17 08:05:56.003336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59951 ] 00:05:51.219 [2024-11-17 08:05:56.176981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.478 [2024-11-17 08:05:56.255203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59967 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59967 /var/tmp/spdk2.sock 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59967 /var/tmp/spdk2.sock 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59967 /var/tmp/spdk2.sock 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59967 ']' 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.047 08:05:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.047 [2024-11-17 08:05:56.961102] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:52.047 [2024-11-17 08:05:56.961292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59967 ] 00:05:52.306 [2024-11-17 08:05:57.138629] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59951 has claimed it. 00:05:52.306 [2024-11-17 08:05:57.138708] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:52.873 ERROR: process (pid: 59967) is no longer running 00:05:52.873 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59967) - No such process 00:05:52.873 08:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.873 08:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:52.873 08:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:52.873 08:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.873 08:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.873 08:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.873 08:05:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59951 00:05:52.873 08:05:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59951 00:05:52.873 08:05:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.133 08:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59951 00:05:53.133 08:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59951 ']' 00:05:53.133 08:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59951 00:05:53.133 08:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.133 08:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.133 08:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59951 00:05:53.133 08:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.133 08:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.133 killing process with pid 59951 00:05:53.133 08:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59951' 00:05:53.133 08:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59951 00:05:53.133 08:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59951 00:05:55.037 00:05:55.037 real 0m3.838s 00:05:55.037 user 0m4.223s 00:05:55.037 sys 0m0.691s 00:05:55.037 08:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.037 08:05:59 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:55.037 ************************************ 00:05:55.037 END TEST locking_app_on_locked_coremask 00:05:55.037 ************************************ 00:05:55.037 08:05:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:55.037 08:05:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.037 08:05:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.037 08:05:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.037 ************************************ 00:05:55.037 START TEST locking_overlapped_coremask 00:05:55.037 ************************************ 00:05:55.037 08:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:55.037 08:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60033 00:05:55.037 08:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60033 /var/tmp/spdk.sock 00:05:55.037 08:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:55.037 08:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60033 ']' 00:05:55.037 08:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.037 08:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.037 08:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.037 08:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.037 08:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.037 [2024-11-17 08:05:59.892066] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:55.037 [2024-11-17 08:05:59.892752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60033 ] 00:05:55.296 [2024-11-17 08:06:00.069754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.296 [2024-11-17 08:06:00.155012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.296 [2024-11-17 08:06:00.155144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.297 [2024-11-17 08:06:00.155164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60051 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60051 /var/tmp/spdk2.sock 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60051 /var/tmp/spdk2.sock 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:55.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60051 /var/tmp/spdk2.sock 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60051 ']' 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.865 08:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.123 [2024-11-17 08:06:00.969824] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
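
The two cpumasks in this test are chosen to collide on exactly one core: 0x7 is cores 0-2 (hence the three reactors above) and 0x1c is cores 2-4, so the second target must fail to claim core 2, which is the error reported on the next lines. The contested core falls out of plain bit arithmetic:

    printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2
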
00:05:56.123 [2024-11-17 08:06:00.970042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60051 ] 00:05:56.381 [2024-11-17 08:06:01.167761] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60033 has claimed it. 00:05:56.381 [2024-11-17 08:06:01.167873] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:56.640 ERROR: process (pid: 60051) is no longer running 00:05:56.640 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60051) - No such process 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60033 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60033 ']' 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60033 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.640 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60033 00:05:56.899 killing process with pid 60033 00:05:56.899 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.899 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.899 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60033' 00:05:56.899 08:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60033 00:05:56.899 08:06:01 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60033 00:05:58.804 00:05:58.804 real 0m3.681s 00:05:58.804 user 0m10.167s 00:05:58.804 sys 0m0.546s 00:05:58.804 08:06:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.804 ************************************ 00:05:58.804 08:06:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.804 END TEST locking_overlapped_coremask 00:05:58.804 ************************************ 00:05:58.804 08:06:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:58.804 08:06:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.804 08:06:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.804 08:06:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.804 ************************************ 00:05:58.804 START TEST locking_overlapped_coremask_via_rpc 00:05:58.804 ************************************ 00:05:58.804 08:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:58.804 08:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60110 00:05:58.804 08:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60110 /var/tmp/spdk.sock 00:05:58.804 08:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60110 ']' 00:05:58.804 08:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.804 08:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.804 08:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:58.804 08:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.804 08:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.804 08:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.804 [2024-11-17 08:06:03.599847] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:58.804 [2024-11-17 08:06:03.599993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60110 ] 00:05:58.804 [2024-11-17 08:06:03.768409] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:58.804 [2024-11-17 08:06:03.768451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.064 [2024-11-17 08:06:03.851528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.064 [2024-11-17 08:06:03.851640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.064 [2024-11-17 08:06:03.851642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.632 08:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.632 08:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.632 08:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60122 00:05:59.632 08:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.632 08:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60122 /var/tmp/spdk2.sock 00:05:59.632 08:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60122 ']' 00:05:59.632 08:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.632 08:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.632 08:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.632 08:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.632 08:06:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.632 [2024-11-17 08:06:04.630768] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:59.632 [2024-11-17 08:06:04.630943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60122 ] 00:05:59.892 [2024-11-17 08:06:04.824922] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.892 [2024-11-17 08:06:04.824982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.151 [2024-11-17 08:06:05.000601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.151 [2024-11-17 08:06:05.004254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.151 [2024-11-17 08:06:05.004273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.529 [2024-11-17 08:06:06.355342] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60110 has claimed it. 
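The claim failure is the point of the test: the first target's mask 0x7 covers cores 0-2 and the second's 0x1c covers cores 2-4, so both reactor sets include core 2, as the reactor startup lines above show. Decoding the two masks (a throwaway sketch, not part of the test):

    $ for m in 0x7 0x1c; do
    >   printf '%s -> cores:' "$m"
    >   for c in 0 1 2 3 4 5 6 7; do (( m >> c & 1 )) && printf ' %d' "$c"; done
    >   printf '\n'
    > done
    0x7 -> cores: 0 1 2
    0x1c -> cores: 2 3 4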
00:06:01.529 request: 00:06:01.529 { 00:06:01.529 "method": "framework_enable_cpumask_locks", 00:06:01.529 "req_id": 1 00:06:01.529 } 00:06:01.529 Got JSON-RPC error response 00:06:01.529 response: 00:06:01.529 { 00:06:01.529 "code": -32603, 00:06:01.529 "message": "Failed to claim CPU core: 2" 00:06:01.529 } 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60110 /var/tmp/spdk.sock 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60110 ']' 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.529 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.788 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.788 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60122 /var/tmp/spdk2.sock 00:06:01.789 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60122 ']' 00:06:01.789 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.789 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.789 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
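Replaying the exchange by hand: framework_enable_cpumask_locks against the first target's default socket succeeds and claims cores 0-2; the same call against /var/tmp/spdk2.sock then fails with -32603, as captured above. A condensed sketch using the repo's rpc.py (output abbreviated):

    $ scripts/rpc.py framework_enable_cpumask_locks                          # first target: ok
    $ scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target:
    request: {"method": "framework_enable_cpumask_locks", "req_id": 1}
    Got JSON-RPC error response: {"code": -32603, "message": "Failed to claim CPU core: 2"}

After both calls, check_remaining_locks compares the glob /var/tmp/spdk_cpu_lock_* against the brace expansion /var/tmp/spdk_cpu_lock_{000..002}; only the first target's three cores should have produced lock files.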
00:06:01.789 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.789 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.048 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.048 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.048 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:02.048 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.048 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.048 ************************************ 00:06:02.048 END TEST locking_overlapped_coremask_via_rpc 00:06:02.048 ************************************ 00:06:02.048 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.048 00:06:02.048 real 0m3.441s 00:06:02.048 user 0m1.362s 00:06:02.048 sys 0m0.165s 00:06:02.048 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.048 08:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.048 08:06:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:02.048 08:06:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60110 ]] 00:06:02.048 08:06:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60110 00:06:02.048 08:06:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60110 ']' 00:06:02.048 08:06:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60110 00:06:02.048 08:06:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:02.048 08:06:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.048 08:06:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60110 00:06:02.048 08:06:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.048 killing process with pid 60110 00:06:02.048 08:06:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.048 08:06:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60110' 00:06:02.048 08:06:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60110 00:06:02.048 08:06:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60110 00:06:03.953 08:06:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60122 ]] 00:06:03.953 08:06:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60122 00:06:03.953 08:06:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60122 ']' 00:06:03.953 08:06:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60122 00:06:03.953 08:06:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:03.953 08:06:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.953 
08:06:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60122 00:06:03.953 killing process with pid 60122 00:06:03.953 08:06:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:03.953 08:06:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:03.953 08:06:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60122' 00:06:03.953 08:06:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60122 00:06:03.953 08:06:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60122 00:06:05.858 08:06:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.858 08:06:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:05.858 08:06:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60110 ]] 00:06:05.858 08:06:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60110 00:06:05.858 08:06:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60110 ']' 00:06:05.858 08:06:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60110 00:06:05.858 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60110) - No such process 00:06:05.858 Process with pid 60110 is not found 00:06:05.858 08:06:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60110 is not found' 00:06:05.858 08:06:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60122 ]] 00:06:05.858 08:06:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60122 00:06:05.858 08:06:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60122 ']' 00:06:05.858 08:06:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60122 00:06:05.858 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60122) - No such process 00:06:05.858 Process with pid 60122 is not found 00:06:05.858 08:06:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60122 is not found' 00:06:05.858 08:06:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.858 ************************************ 00:06:05.858 END TEST cpu_locks 00:06:05.858 ************************************ 00:06:05.858 00:06:05.858 real 0m37.843s 00:06:05.858 user 1m6.948s 00:06:05.858 sys 0m5.577s 00:06:05.858 08:06:10 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.858 08:06:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.858 00:06:05.858 real 1m8.328s 00:06:05.858 user 2m8.808s 00:06:05.858 sys 0m8.997s 00:06:05.858 08:06:10 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.858 08:06:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.858 ************************************ 00:06:05.858 END TEST event 00:06:05.858 ************************************ 00:06:05.858 08:06:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:05.858 08:06:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.858 08:06:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.858 08:06:10 -- common/autotest_common.sh@10 -- # set +x 00:06:05.858 ************************************ 00:06:05.858 START TEST thread 00:06:05.858 ************************************ 00:06:05.858 08:06:10 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:05.858 * Looking for test storage... 
00:06:05.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:05.858 08:06:10 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.858 08:06:10 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.858 08:06:10 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.858 08:06:10 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.858 08:06:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.858 08:06:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.858 08:06:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.858 08:06:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.858 08:06:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.858 08:06:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.858 08:06:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.858 08:06:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.858 08:06:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.118 08:06:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.118 08:06:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.118 08:06:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:06.118 08:06:10 thread -- scripts/common.sh@345 -- # : 1 00:06:06.118 08:06:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.118 08:06:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.118 08:06:10 thread -- scripts/common.sh@365 -- # decimal 1 00:06:06.118 08:06:10 thread -- scripts/common.sh@353 -- # local d=1 00:06:06.118 08:06:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.118 08:06:10 thread -- scripts/common.sh@355 -- # echo 1 00:06:06.118 08:06:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.118 08:06:10 thread -- scripts/common.sh@366 -- # decimal 2 00:06:06.118 08:06:10 thread -- scripts/common.sh@353 -- # local d=2 00:06:06.118 08:06:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.118 08:06:10 thread -- scripts/common.sh@355 -- # echo 2 00:06:06.118 08:06:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.118 08:06:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.118 08:06:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.118 08:06:10 thread -- scripts/common.sh@368 -- # return 0 00:06:06.118 08:06:10 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.118 08:06:10 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.118 --rc genhtml_branch_coverage=1 00:06:06.118 --rc genhtml_function_coverage=1 00:06:06.118 --rc genhtml_legend=1 00:06:06.118 --rc geninfo_all_blocks=1 00:06:06.118 --rc geninfo_unexecuted_blocks=1 00:06:06.118 00:06:06.118 ' 00:06:06.118 08:06:10 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.118 --rc genhtml_branch_coverage=1 00:06:06.118 --rc genhtml_function_coverage=1 00:06:06.118 --rc genhtml_legend=1 00:06:06.118 --rc geninfo_all_blocks=1 00:06:06.118 --rc geninfo_unexecuted_blocks=1 00:06:06.118 00:06:06.118 ' 00:06:06.118 08:06:10 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:06.118 --rc genhtml_branch_coverage=1 00:06:06.118 --rc genhtml_function_coverage=1 00:06:06.118 --rc genhtml_legend=1 00:06:06.118 --rc geninfo_all_blocks=1 00:06:06.118 --rc geninfo_unexecuted_blocks=1 00:06:06.118 00:06:06.118 ' 00:06:06.118 08:06:10 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.118 --rc genhtml_branch_coverage=1 00:06:06.118 --rc genhtml_function_coverage=1 00:06:06.118 --rc genhtml_legend=1 00:06:06.118 --rc geninfo_all_blocks=1 00:06:06.118 --rc geninfo_unexecuted_blocks=1 00:06:06.118 00:06:06.118 ' 00:06:06.118 08:06:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.118 08:06:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:06.118 08:06:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.118 08:06:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.118 ************************************ 00:06:06.118 START TEST thread_poller_perf 00:06:06.118 ************************************ 00:06:06.118 08:06:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.118 [2024-11-17 08:06:10.933979] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:06.118 [2024-11-17 08:06:10.934148] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60302 ] 00:06:06.118 [2024-11-17 08:06:11.120191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.377 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:06.377 [2024-11-17 08:06:11.243221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.754 [2024-11-17T08:06:12.766Z] ====================================== 00:06:07.754 [2024-11-17T08:06:12.766Z] busy:2212887778 (cyc) 00:06:07.754 [2024-11-17T08:06:12.766Z] total_run_count: 366000 00:06:07.754 [2024-11-17T08:06:12.766Z] tsc_hz: 2200000000 (cyc) 00:06:07.754 [2024-11-17T08:06:12.766Z] ====================================== 00:06:07.754 [2024-11-17T08:06:12.766Z] poller_cost: 6046 (cyc), 2748 (nsec) 00:06:07.754 00:06:07.754 real 0m1.542s 00:06:07.754 user 0m1.344s 00:06:07.754 sys 0m0.091s 00:06:07.754 08:06:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.754 ************************************ 00:06:07.754 END TEST thread_poller_perf 00:06:07.754 ************************************ 00:06:07.754 08:06:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.754 08:06:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:07.754 08:06:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:07.754 08:06:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.754 08:06:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.754 ************************************ 00:06:07.754 START TEST thread_poller_perf 00:06:07.754 ************************************ 00:06:07.754 08:06:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:07.754 [2024-11-17 08:06:12.527424] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:07.754 [2024-11-17 08:06:12.527575] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60338 ] 00:06:07.754 [2024-11-17 08:06:12.709361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.013 Running 1000 pollers for 1 seconds with 0 microseconds period. 
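In both poller_perf runs, poller_cost works out as busy TSC cycles divided by total_run_count, converted to nanoseconds through tsc_hz. Re-deriving the first run's figures as a sanity check (a throwaway calculation, not test output):

    $ awk 'BEGIN { busy = 2212887778; runs = 366000; hz = 2200000000
                   cyc = busy / runs
                   printf "%.0f cyc, %.0f nsec\n", cyc, cyc / hz * 1e9 }'
    6046 cyc, 2748 nsec

The zero-period run below checks out the same way: 2203375302 cycles over 4831000 polls is 456 cycles, roughly 207 ns per poll.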
00:06:08.013 [2024-11-17 08:06:12.792070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.949 [2024-11-17T08:06:13.961Z] ====================================== 00:06:08.949 [2024-11-17T08:06:13.961Z] busy:2203375302 (cyc) 00:06:08.949 [2024-11-17T08:06:13.961Z] total_run_count: 4831000 00:06:08.949 [2024-11-17T08:06:13.961Z] tsc_hz: 2200000000 (cyc) 00:06:08.949 [2024-11-17T08:06:13.961Z] ====================================== 00:06:08.949 [2024-11-17T08:06:13.961Z] poller_cost: 456 (cyc), 207 (nsec) 00:06:09.208 00:06:09.208 real 0m1.483s 00:06:09.208 user 0m1.291s 00:06:09.208 sys 0m0.085s 00:06:09.208 08:06:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.208 08:06:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.208 ************************************ 00:06:09.208 END TEST thread_poller_perf 00:06:09.208 ************************************ 00:06:09.208 08:06:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:09.208 ************************************ 00:06:09.208 END TEST thread 00:06:09.208 ************************************ 00:06:09.208 00:06:09.208 real 0m3.312s 00:06:09.208 user 0m2.789s 00:06:09.208 sys 0m0.308s 00:06:09.208 08:06:14 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.208 08:06:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.208 08:06:14 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:09.208 08:06:14 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:09.208 08:06:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.208 08:06:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.208 08:06:14 -- common/autotest_common.sh@10 -- # set +x 00:06:09.208 ************************************ 00:06:09.208 START TEST app_cmdline 00:06:09.208 ************************************ 00:06:09.208 08:06:14 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:09.208 * Looking for test storage... 
00:06:09.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:09.208 08:06:14 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.208 08:06:14 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.208 08:06:14 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.467 08:06:14 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.467 08:06:14 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:09.467 08:06:14 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.467 08:06:14 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.467 --rc genhtml_branch_coverage=1 00:06:09.467 --rc genhtml_function_coverage=1 00:06:09.467 --rc genhtml_legend=1 00:06:09.467 --rc geninfo_all_blocks=1 00:06:09.467 --rc geninfo_unexecuted_blocks=1 00:06:09.467 00:06:09.467 ' 00:06:09.467 08:06:14 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.467 --rc genhtml_branch_coverage=1 00:06:09.467 --rc genhtml_function_coverage=1 00:06:09.467 --rc genhtml_legend=1 00:06:09.467 --rc geninfo_all_blocks=1 00:06:09.467 --rc geninfo_unexecuted_blocks=1 00:06:09.467 
00:06:09.467 ' 00:06:09.467 08:06:14 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.467 --rc genhtml_branch_coverage=1 00:06:09.467 --rc genhtml_function_coverage=1 00:06:09.467 --rc genhtml_legend=1 00:06:09.467 --rc geninfo_all_blocks=1 00:06:09.467 --rc geninfo_unexecuted_blocks=1 00:06:09.467 00:06:09.467 ' 00:06:09.467 08:06:14 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.467 --rc genhtml_branch_coverage=1 00:06:09.467 --rc genhtml_function_coverage=1 00:06:09.467 --rc genhtml_legend=1 00:06:09.467 --rc geninfo_all_blocks=1 00:06:09.467 --rc geninfo_unexecuted_blocks=1 00:06:09.467 00:06:09.467 ' 00:06:09.467 08:06:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:09.467 08:06:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60427 00:06:09.467 08:06:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60427 00:06:09.467 08:06:14 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60427 ']' 00:06:09.467 08:06:14 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:09.467 08:06:14 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.467 08:06:14 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.467 08:06:14 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.467 08:06:14 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.467 08:06:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.467 [2024-11-17 08:06:14.368288] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
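This target is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so the RPC server answers exactly those two methods and rejects everything else; that is why the env_dpdk_get_mem_stats call further below fails with -32601. A manual probe would look like this (a sketch against the default socket, responses condensed):

    $ scripts/rpc.py spdk_get_version          # allowed: returns the version object
    $ scripts/rpc.py rpc_get_methods           # allowed: lists the two permitted methods
    $ scripts/rpc.py env_dpdk_get_mem_stats    # not on the allow-list:
    Got JSON-RPC error response: {"code": -32601, "message": "Method not found"}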
00:06:09.467 [2024-11-17 08:06:14.368488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60427 ] 00:06:09.727 [2024-11-17 08:06:14.546941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.727 [2024-11-17 08:06:14.625940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:10.665 08:06:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:10.665 { 00:06:10.665 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:06:10.665 "fields": { 00:06:10.665 "major": 25, 00:06:10.665 "minor": 1, 00:06:10.665 "patch": 0, 00:06:10.665 "suffix": "-pre", 00:06:10.665 "commit": "83e8405e4" 00:06:10.665 } 00:06:10.665 } 00:06:10.665 08:06:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:10.665 08:06:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:10.665 08:06:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:10.665 08:06:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:10.665 08:06:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.665 08:06:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.665 08:06:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.665 08:06:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:10.665 08:06:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:10.665 08:06:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:10.665 08:06:15 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.924 request: 00:06:10.924 { 00:06:10.924 "method": "env_dpdk_get_mem_stats", 00:06:10.924 "req_id": 1 00:06:10.924 } 00:06:10.924 Got JSON-RPC error response 00:06:10.924 response: 00:06:10.924 { 00:06:10.924 "code": -32601, 00:06:10.924 "message": "Method not found" 00:06:10.924 } 00:06:10.924 08:06:15 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:10.924 08:06:15 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.924 08:06:15 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.924 08:06:15 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.924 08:06:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60427 00:06:10.924 08:06:15 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60427 ']' 00:06:10.924 08:06:15 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60427 00:06:10.924 08:06:15 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:10.924 08:06:15 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.924 08:06:15 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60427 00:06:11.183 08:06:15 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.183 08:06:15 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.183 killing process with pid 60427 00:06:11.183 08:06:15 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60427' 00:06:11.183 08:06:15 app_cmdline -- common/autotest_common.sh@973 -- # kill 60427 00:06:11.183 08:06:15 app_cmdline -- common/autotest_common.sh@978 -- # wait 60427 00:06:12.560 00:06:12.560 real 0m3.498s 00:06:12.560 user 0m4.089s 00:06:12.560 sys 0m0.481s 00:06:12.560 08:06:17 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.560 ************************************ 00:06:12.560 END TEST app_cmdline 00:06:12.560 ************************************ 00:06:12.560 08:06:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.820 08:06:17 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:12.820 08:06:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.820 08:06:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.820 08:06:17 -- common/autotest_common.sh@10 -- # set +x 00:06:12.820 ************************************ 00:06:12.820 START TEST version 00:06:12.820 ************************************ 00:06:12.820 08:06:17 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:12.820 * Looking for test storage... 
00:06:12.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:12.820 08:06:17 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.820 08:06:17 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.820 08:06:17 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.820 08:06:17 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.820 08:06:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.820 08:06:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.820 08:06:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.820 08:06:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.820 08:06:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.820 08:06:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.820 08:06:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.820 08:06:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.820 08:06:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.820 08:06:17 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.820 08:06:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.820 08:06:17 version -- scripts/common.sh@344 -- # case "$op" in 00:06:12.820 08:06:17 version -- scripts/common.sh@345 -- # : 1 00:06:12.820 08:06:17 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.820 08:06:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.820 08:06:17 version -- scripts/common.sh@365 -- # decimal 1 00:06:12.820 08:06:17 version -- scripts/common.sh@353 -- # local d=1 00:06:12.820 08:06:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.820 08:06:17 version -- scripts/common.sh@355 -- # echo 1 00:06:12.820 08:06:17 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.820 08:06:17 version -- scripts/common.sh@366 -- # decimal 2 00:06:12.820 08:06:17 version -- scripts/common.sh@353 -- # local d=2 00:06:12.820 08:06:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.820 08:06:17 version -- scripts/common.sh@355 -- # echo 2 00:06:12.820 08:06:17 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.820 08:06:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.820 08:06:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.820 08:06:17 version -- scripts/common.sh@368 -- # return 0 00:06:12.820 08:06:17 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.820 08:06:17 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.820 --rc genhtml_branch_coverage=1 00:06:12.820 --rc genhtml_function_coverage=1 00:06:12.820 --rc genhtml_legend=1 00:06:12.820 --rc geninfo_all_blocks=1 00:06:12.820 --rc geninfo_unexecuted_blocks=1 00:06:12.820 00:06:12.820 ' 00:06:12.820 08:06:17 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.820 --rc genhtml_branch_coverage=1 00:06:12.820 --rc genhtml_function_coverage=1 00:06:12.820 --rc genhtml_legend=1 00:06:12.820 --rc geninfo_all_blocks=1 00:06:12.820 --rc geninfo_unexecuted_blocks=1 00:06:12.820 00:06:12.820 ' 00:06:12.820 08:06:17 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.820 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:12.820 --rc genhtml_branch_coverage=1 00:06:12.820 --rc genhtml_function_coverage=1 00:06:12.820 --rc genhtml_legend=1 00:06:12.820 --rc geninfo_all_blocks=1 00:06:12.820 --rc geninfo_unexecuted_blocks=1 00:06:12.820 00:06:12.820 ' 00:06:12.820 08:06:17 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.820 --rc genhtml_branch_coverage=1 00:06:12.820 --rc genhtml_function_coverage=1 00:06:12.820 --rc genhtml_legend=1 00:06:12.820 --rc geninfo_all_blocks=1 00:06:12.820 --rc geninfo_unexecuted_blocks=1 00:06:12.820 00:06:12.820 ' 00:06:12.820 08:06:17 version -- app/version.sh@17 -- # get_header_version major 00:06:12.820 08:06:17 version -- app/version.sh@14 -- # cut -f2 00:06:12.820 08:06:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:12.820 08:06:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.820 08:06:17 version -- app/version.sh@17 -- # major=25 00:06:12.820 08:06:17 version -- app/version.sh@18 -- # get_header_version minor 00:06:12.820 08:06:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:12.820 08:06:17 version -- app/version.sh@14 -- # cut -f2 00:06:12.820 08:06:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.820 08:06:17 version -- app/version.sh@18 -- # minor=1 00:06:12.820 08:06:17 version -- app/version.sh@19 -- # get_header_version patch 00:06:12.820 08:06:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:12.820 08:06:17 version -- app/version.sh@14 -- # cut -f2 00:06:12.820 08:06:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.820 08:06:17 version -- app/version.sh@19 -- # patch=0 00:06:12.820 08:06:17 version -- app/version.sh@20 -- # get_header_version suffix 00:06:12.820 08:06:17 version -- app/version.sh@14 -- # cut -f2 00:06:12.820 08:06:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:12.820 08:06:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.820 08:06:17 version -- app/version.sh@20 -- # suffix=-pre 00:06:12.820 08:06:17 version -- app/version.sh@22 -- # version=25.1 00:06:12.820 08:06:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:12.820 08:06:17 version -- app/version.sh@28 -- # version=25.1rc0 00:06:12.820 08:06:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:12.820 08:06:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:13.080 08:06:17 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:13.080 08:06:17 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:13.080 00:06:13.080 real 0m0.248s 00:06:13.080 user 0m0.162s 00:06:13.080 sys 0m0.118s 00:06:13.080 08:06:17 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.080 08:06:17 version -- common/autotest_common.sh@10 -- # set +x 00:06:13.080 ************************************ 00:06:13.080 END TEST version 00:06:13.080 ************************************ 00:06:13.080 08:06:17 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:13.080 08:06:17 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:13.080 08:06:17 -- spdk/autotest.sh@194 -- # uname -s 00:06:13.080 08:06:17 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:13.080 08:06:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:13.080 08:06:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:13.080 08:06:17 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:13.080 08:06:17 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:13.080 08:06:17 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:13.080 08:06:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.080 08:06:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.080 ************************************ 00:06:13.080 START TEST blockdev_nvme 00:06:13.080 ************************************ 00:06:13.080 08:06:17 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:13.080 * Looking for test storage... 00:06:13.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:13.080 08:06:17 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.080 08:06:17 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.080 08:06:17 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.080 08:06:18 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:13.080 08:06:18 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.340 08:06:18 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:13.340 08:06:18 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.340 08:06:18 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:13.340 08:06:18 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:13.340 08:06:18 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.340 08:06:18 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:13.340 08:06:18 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.340 08:06:18 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.340 08:06:18 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.340 08:06:18 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:13.340 08:06:18 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.340 08:06:18 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.340 --rc genhtml_branch_coverage=1 00:06:13.340 --rc genhtml_function_coverage=1 00:06:13.340 --rc genhtml_legend=1 00:06:13.340 --rc geninfo_all_blocks=1 00:06:13.340 --rc geninfo_unexecuted_blocks=1 00:06:13.340 00:06:13.340 ' 00:06:13.340 08:06:18 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.340 --rc genhtml_branch_coverage=1 00:06:13.340 --rc genhtml_function_coverage=1 00:06:13.340 --rc genhtml_legend=1 00:06:13.340 --rc geninfo_all_blocks=1 00:06:13.340 --rc geninfo_unexecuted_blocks=1 00:06:13.340 00:06:13.340 ' 00:06:13.340 08:06:18 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.340 --rc genhtml_branch_coverage=1 00:06:13.340 --rc genhtml_function_coverage=1 00:06:13.340 --rc genhtml_legend=1 00:06:13.340 --rc geninfo_all_blocks=1 00:06:13.340 --rc geninfo_unexecuted_blocks=1 00:06:13.340 00:06:13.340 ' 00:06:13.340 08:06:18 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.340 --rc genhtml_branch_coverage=1 00:06:13.340 --rc genhtml_function_coverage=1 00:06:13.340 --rc genhtml_legend=1 00:06:13.340 --rc geninfo_all_blocks=1 00:06:13.340 --rc geninfo_unexecuted_blocks=1 00:06:13.340 00:06:13.340 ' 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:13.340 08:06:18 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60605 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60605 00:06:13.340 08:06:18 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60605 ']' 00:06:13.340 08:06:18 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:13.340 08:06:18 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.340 08:06:18 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.340 08:06:18 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.340 08:06:18 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.340 08:06:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.340 [2024-11-17 08:06:18.205026] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
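Once the target is up, blockdev.sh feeds it an NVMe bdev config generated by scripts/gen_nvme.sh; the full four-controller JSON appears in the load_subsystem_config call below. A minimal single-controller equivalent, with the method name and traddr taken from that call (a sketch, not the script's actual output):

    $ scripts/rpc.py load_subsystem_config -j '{
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
        ]
      }'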
00:06:13.340 [2024-11-17 08:06:18.205195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60605 ] 00:06:13.600 [2024-11-17 08:06:18.368212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.600 [2024-11-17 08:06:18.448820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.536 08:06:19 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.536 08:06:19 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:14.536 08:06:19 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:14.536 08:06:19 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:14.536 08:06:19 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:14.536 08:06:19 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:14.536 08:06:19 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:14.536 08:06:19 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:14.536 08:06:19 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.536 08:06:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.536 08:06:19 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.536 08:06:19 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:14.536 08:06:19 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.536 08:06:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.536 08:06:19 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.536 08:06:19 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:14.536 08:06:19 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:14.536 08:06:19 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.536 08:06:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.795 08:06:19 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.795 08:06:19 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:14.795 08:06:19 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.795 08:06:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.795 08:06:19 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.795 08:06:19 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:14.795 08:06:19 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.795 08:06:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.795 08:06:19 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.795 08:06:19 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:14.795 08:06:19 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:14.795 08:06:19 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.795 08:06:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.795 08:06:19 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:14.795 08:06:19 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.795 08:06:19 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:14.795 08:06:19 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:14.796 08:06:19 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a40b4078-533f-4abc-9d82-64950c01c2db"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a40b4078-533f-4abc-9d82-64950c01c2db",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "027c9257-6907-4ed5-b41d-75be3025823c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "027c9257-6907-4ed5-b41d-75be3025823c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e8a3a19d-28a6-4b74-b433-ba0fc494952f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e8a3a19d-28a6-4b74-b433-ba0fc494952f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5383c431-b623-4547-800d-5b52303e3883"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5383c431-b623-4547-800d-5b52303e3883",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "799cf19c-5253-4c09-a630-7a2cfe826229"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "799cf19c-5253-4c09-a630-7a2cfe826229",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "04635c2c-db78-46a9-8d35-85f6a828caf8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "04635c2c-db78-46a9-8d35-85f6a828caf8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:14.796 08:06:19 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:14.796 08:06:19 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:14.796 08:06:19 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:14.796 08:06:19 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60605 00:06:14.796 08:06:19 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60605 ']' 00:06:14.796 08:06:19 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60605 00:06:14.796 08:06:19 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:14.796 08:06:19 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.796 08:06:19 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60605 00:06:14.796 08:06:19 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.796 08:06:19 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.796 killing process with pid 60605 00:06:14.796 08:06:19 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60605' 00:06:14.796 08:06:19 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60605 00:06:14.796 08:06:19 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60605 00:06:16.700 08:06:21 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:16.700 08:06:21 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:16.700 08:06:21 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:16.700 08:06:21 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.700 08:06:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.700 ************************************ 00:06:16.700 START TEST bdev_hello_world 00:06:16.700 ************************************ 00:06:16.700 08:06:21 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:16.700 [2024-11-17 08:06:21.520559] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:16.700 [2024-11-17 08:06:21.520736] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60689 ] 00:06:16.700 [2024-11-17 08:06:21.697676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.959 [2024-11-17 08:06:21.779507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.526 [2024-11-17 08:06:22.328155] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:17.526 [2024-11-17 08:06:22.328202] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:17.526 [2024-11-17 08:06:22.328239] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:17.526 [2024-11-17 08:06:22.330749] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:17.527 [2024-11-17 08:06:22.331369] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:17.527 [2024-11-17 08:06:22.331451] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:17.527 [2024-11-17 08:06:22.331690] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
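(For reference, the controller set that setup_nvme_conf loaded near the top of this run can be reproduced outside the harness with plain rpc.py calls against an already-running spdk_tgt. This is a minimal sketch, not the harness's own path: it assumes a repo-relative scripts/ directory and the default /var/tmp/spdk.sock RPC socket; the bdev names and PCI addresses mirror the load_subsystem_config JSON emitted above by scripts/gen_nvme.sh.)

    # attach the four QEMU NVMe controllers by PCI address,
    # matching the Nvme0..Nvme3 / 0000:00:10.0..13.0 layout above
    for i in 0 1 2 3; do
        scripts/rpc.py bdev_nvme_attach_controller -b "Nvme$i" -t pcie -a "0000:00:1$i.0"
    done
    # list the resulting unclaimed bdevs by name, using the same jq
    # filter the harness applies to bdev_get_bdevs output
    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'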
00:06:17.527 00:06:17.527 [2024-11-17 08:06:22.331730] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:18.121 00:06:18.121 real 0m1.686s 00:06:18.121 user 0m1.382s 00:06:18.121 sys 0m0.197s 00:06:18.121 08:06:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.121 08:06:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:18.121 ************************************ 00:06:18.121 END TEST bdev_hello_world 00:06:18.121 ************************************ 00:06:18.410 08:06:23 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:18.410 08:06:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:18.410 08:06:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.410 08:06:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:18.410 ************************************ 00:06:18.410 START TEST bdev_bounds 00:06:18.410 ************************************ 00:06:18.410 08:06:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:18.410 08:06:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60733 00:06:18.410 08:06:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:18.410 08:06:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.410 Process bdevio pid: 60733 00:06:18.410 08:06:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60733' 00:06:18.410 08:06:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60733 00:06:18.410 08:06:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60733 ']' 00:06:18.410 08:06:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.411 08:06:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.411 08:06:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.411 08:06:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.411 08:06:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:18.411 [2024-11-17 08:06:23.256587] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:18.411 [2024-11-17 08:06:23.256779] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60733 ] 00:06:18.690 [2024-11-17 08:06:23.429812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.690 [2024-11-17 08:06:23.511292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.690 [2024-11-17 08:06:23.511357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.690 [2024-11-17 08:06:23.511374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.258 08:06:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.258 08:06:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:19.258 08:06:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:19.518 I/O targets: 00:06:19.518 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:19.518 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:19.518 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:19.518 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:19.518 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:19.518 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:19.518 00:06:19.518 00:06:19.518 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.518 http://cunit.sourceforge.net/ 00:06:19.518 00:06:19.518 00:06:19.518 Suite: bdevio tests on: Nvme3n1 00:06:19.518 Test: blockdev write read block ...passed 00:06:19.518 Test: blockdev write zeroes read block ...passed 00:06:19.518 Test: blockdev write zeroes read no split ...passed 00:06:19.518 Test: blockdev write zeroes read split ...passed 00:06:19.518 Test: blockdev write zeroes read split partial ...passed 00:06:19.518 Test: blockdev reset ...[2024-11-17 08:06:24.325227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:19.518 [2024-11-17 08:06:24.328399] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:19.518 passed 00:06:19.518 Test: blockdev write read 8 blocks ...passed 00:06:19.518 Test: blockdev write read size > 128k ...passed 00:06:19.518 Test: blockdev write read invalid size ...passed 00:06:19.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.518 Test: blockdev write read max offset ...passed 00:06:19.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.518 Test: blockdev writev readv 8 blocks ...passed 00:06:19.518 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.518 Test: blockdev writev readv block ...passed 00:06:19.518 Test: blockdev writev readv size > 128k ...passed 00:06:19.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.518 Test: blockdev comparev and writev ...[2024-11-17 08:06:24.336023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d100a000 len:0x1000 00:06:19.518 [2024-11-17 08:06:24.336134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:19.518 passed 00:06:19.518 Test: blockdev nvme passthru rw ...passed 00:06:19.518 Test: blockdev nvme passthru vendor specific ...passed 00:06:19.518 Test: blockdev nvme admin passthru ...[2024-11-17 08:06:24.336940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:19.518 [2024-11-17 08:06:24.336989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:19.518 passed 00:06:19.518 Test: blockdev copy ...passed 00:06:19.518 Suite: bdevio tests on: Nvme2n3 00:06:19.518 Test: blockdev write read block ...passed 00:06:19.518 Test: blockdev write zeroes read block ...passed 00:06:19.518 Test: blockdev write zeroes read no split ...passed 00:06:19.518 Test: blockdev write zeroes read split ...passed 00:06:19.518 Test: blockdev write zeroes read split partial ...passed 00:06:19.518 Test: blockdev reset ...[2024-11-17 08:06:24.395788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:19.518 [2024-11-17 08:06:24.399177] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:19.518 passed 00:06:19.518 Test: blockdev write read 8 blocks ...passed 00:06:19.518 Test: blockdev write read size > 128k ...passed 00:06:19.518 Test: blockdev write read invalid size ...passed 00:06:19.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.518 Test: blockdev write read max offset ...passed 00:06:19.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.518 Test: blockdev writev readv 8 blocks ...passed 00:06:19.518 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.518 Test: blockdev writev readv block ...passed 00:06:19.518 Test: blockdev writev readv size > 128k ...passed 00:06:19.518 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.518 Test: blockdev comparev and writev ...[2024-11-17 08:06:24.407006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ad006000 len:0x1000 00:06:19.518 [2024-11-17 08:06:24.407086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:19.518 passed 00:06:19.519 Test: blockdev nvme passthru rw ...passed 00:06:19.519 Test: blockdev nvme passthru vendor specific ...[2024-11-17 08:06:24.407947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:19.519 [2024-11-17 08:06:24.407995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:19.519 passed 00:06:19.519 Test: blockdev nvme admin passthru ...passed 00:06:19.519 Test: blockdev copy ...passed 00:06:19.519 Suite: bdevio tests on: Nvme2n2 00:06:19.519 Test: blockdev write read block ...passed 00:06:19.519 Test: blockdev write zeroes read block ...passed 00:06:19.519 Test: blockdev write zeroes read no split ...passed 00:06:19.519 Test: blockdev write zeroes read split ...passed 00:06:19.519 Test: blockdev write zeroes read split partial ...passed 00:06:19.519 Test: blockdev reset ...[2024-11-17 08:06:24.465850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:19.519 [2024-11-17 08:06:24.469203] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:19.519 passed 00:06:19.519 Test: blockdev write read 8 blocks ...passed 00:06:19.519 Test: blockdev write read size > 128k ...passed 00:06:19.519 Test: blockdev write read invalid size ...passed 00:06:19.519 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.519 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.519 Test: blockdev write read max offset ...passed 00:06:19.519 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.519 Test: blockdev writev readv 8 blocks ...passed 00:06:19.519 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.519 Test: blockdev writev readv block ...passed 00:06:19.519 Test: blockdev writev readv size > 128k ...passed 00:06:19.519 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.519 Test: blockdev comparev and writev ...[2024-11-17 08:06:24.476788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de03c000 len:0x1000 00:06:19.519 [2024-11-17 08:06:24.476855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:19.519 passed 00:06:19.519 Test: blockdev nvme passthru rw ...passed 00:06:19.519 Test: blockdev nvme passthru vendor specific ...[2024-11-17 08:06:24.477692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:19.519 passed 00:06:19.519 Test: blockdev nvme admin passthru ...[2024-11-17 08:06:24.477722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:19.519 passed 00:06:19.519 Test: blockdev copy ...passed 00:06:19.519 Suite: bdevio tests on: Nvme2n1 00:06:19.519 Test: blockdev write read block ...passed 00:06:19.519 Test: blockdev write zeroes read block ...passed 00:06:19.519 Test: blockdev write zeroes read no split ...passed 00:06:19.519 Test: blockdev write zeroes read split ...passed 00:06:19.779 Test: blockdev write zeroes read split partial ...passed 00:06:19.779 Test: blockdev reset ...[2024-11-17 08:06:24.538621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:19.779 [2024-11-17 08:06:24.542144] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:19.779 passed 00:06:19.779 Test: blockdev write read 8 blocks ...passed 00:06:19.779 Test: blockdev write read size > 128k ...passed 00:06:19.779 Test: blockdev write read invalid size ...passed 00:06:19.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.779 Test: blockdev write read max offset ...passed 00:06:19.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.779 Test: blockdev writev readv 8 blocks ...passed 00:06:19.779 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.779 Test: blockdev writev readv block ...passed 00:06:19.779 Test: blockdev writev readv size > 128k ...passed 00:06:19.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.779 Test: blockdev comparev and writev ...[2024-11-17 08:06:24.549576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de038000 len:0x1000 00:06:19.779 [2024-11-17 08:06:24.549645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:19.779 passed 00:06:19.779 Test: blockdev nvme passthru rw ...passed 00:06:19.779 Test: blockdev nvme passthru vendor specific ...[2024-11-17 08:06:24.550493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:19.779 [2024-11-17 08:06:24.550526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:19.779 passed 00:06:19.779 Test: blockdev nvme admin passthru ...passed 00:06:19.779 Test: blockdev copy ...passed 00:06:19.779 Suite: bdevio tests on: Nvme1n1 00:06:19.779 Test: blockdev write read block ...passed 00:06:19.779 Test: blockdev write zeroes read block ...passed 00:06:19.779 Test: blockdev write zeroes read no split ...passed 00:06:19.779 Test: blockdev write zeroes read split ...passed 00:06:19.779 Test: blockdev write zeroes read split partial ...passed 00:06:19.779 Test: blockdev reset ...[2024-11-17 08:06:24.614399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:19.779 [2024-11-17 08:06:24.617653] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:19.779 passed 00:06:19.779 Test: blockdev write read 8 blocks ...passed 00:06:19.779 Test: blockdev write read size > 128k ...passed 00:06:19.779 Test: blockdev write read invalid size ...passed 00:06:19.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.779 Test: blockdev write read max offset ...passed 00:06:19.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.779 Test: blockdev writev readv 8 blocks ...passed 00:06:19.779 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.779 Test: blockdev writev readv block ...passed 00:06:19.779 Test: blockdev writev readv size > 128k ...passed 00:06:19.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.779 Test: blockdev comparev and writev ...[2024-11-17 08:06:24.625705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de034000 len:0x1000 00:06:19.779 [2024-11-17 08:06:24.625788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:19.779 passed 00:06:19.779 Test: blockdev nvme passthru rw ...passed 00:06:19.779 Test: blockdev nvme passthru vendor specific ...[2024-11-17 08:06:24.626612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:19.779 passed 00:06:19.779 Test: blockdev nvme admin passthru ...[2024-11-17 08:06:24.626660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:19.779 passed 00:06:19.779 Test: blockdev copy ...passed 00:06:19.779 Suite: bdevio tests on: Nvme0n1 00:06:19.779 Test: blockdev write read block ...passed 00:06:19.779 Test: blockdev write zeroes read block ...passed 00:06:19.779 Test: blockdev write zeroes read no split ...passed 00:06:19.779 Test: blockdev write zeroes read split ...passed 00:06:19.779 Test: blockdev write zeroes read split partial ...passed 00:06:19.779 Test: blockdev reset ...[2024-11-17 08:06:24.685501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:19.779 [2024-11-17 08:06:24.689230] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:06:19.779 passed 00:06:19.779 Test: blockdev write read 8 blocks ...passed 00:06:19.779 Test: blockdev write read size > 128k ...passed 00:06:19.779 Test: blockdev write read invalid size ...passed 00:06:19.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:19.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:19.779 Test: blockdev write read max offset ...passed 00:06:19.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:19.779 Test: blockdev writev readv 8 blocks ...passed 00:06:19.779 Test: blockdev writev readv 30 x 1block ...passed 00:06:19.779 Test: blockdev writev readv block ...passed 00:06:19.779 Test: blockdev writev readv size > 128k ...passed 00:06:19.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:19.779 Test: blockdev comparev and writev ...passed 00:06:19.779 Test: blockdev nvme passthru rw ...[2024-11-17 08:06:24.696447] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:19.779 separate metadata which is not supported yet. 00:06:19.779 passed 00:06:19.779 Test: blockdev nvme passthru vendor specific ...[2024-11-17 08:06:24.697263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:19.779 [2024-11-17 08:06:24.697319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:19.779 passed 00:06:19.779 Test: blockdev nvme admin passthru ...passed 00:06:19.779 Test: blockdev copy ...passed 00:06:19.779 00:06:19.779 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.779 suites 6 6 n/a 0 0 00:06:19.779 tests 138 138 138 0 0 00:06:19.779 asserts 893 893 893 0 n/a 00:06:19.779 00:06:19.779 Elapsed time = 1.137 seconds 00:06:19.779 0 00:06:19.779 08:06:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60733 00:06:19.779 08:06:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60733 ']' 00:06:19.779 08:06:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60733 00:06:19.779 08:06:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:19.779 08:06:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.779 08:06:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60733 00:06:19.779 08:06:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.779 08:06:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.779 killing process with pid 60733 00:06:19.779 08:06:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60733' 00:06:19.779 08:06:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60733 00:06:19.779 08:06:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60733 00:06:20.718 08:06:25 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:20.718 00:06:20.718 real 0m2.358s 00:06:20.718 user 0m6.082s 00:06:20.718 sys 0m0.324s 00:06:20.718 08:06:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.718 08:06:25 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:20.718 ************************************ 00:06:20.718 END 
TEST bdev_bounds 00:06:20.718 ************************************ 00:06:20.718 08:06:25 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:20.718 08:06:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:20.718 08:06:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.718 08:06:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:20.718 ************************************ 00:06:20.718 START TEST bdev_nbd 00:06:20.718 ************************************ 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60787 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60787 /var/tmp/spdk-nbd.sock 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60787 ']' 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.718 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.718 08:06:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:20.718 [2024-11-17 08:06:25.658740] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:20.718 [2024-11-17 08:06:25.658896] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.977 [2024-11-17 08:06:25.815504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.977 [2024-11-17 08:06:25.896644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.916 08:06:26 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.916 1+0 records in 00:06:21.916 1+0 records out 00:06:21.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046651 s, 8.8 MB/s 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:21.916 08:06:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.176 1+0 records in 00:06:22.176 1+0 records out 00:06:22.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602678 s, 6.8 MB/s 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.176 08:06:27 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:22.176 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.435 1+0 records in 00:06:22.435 1+0 records out 00:06:22.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588148 s, 7.0 MB/s 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.435 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.436 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.436 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.436 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:22.436 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:22.436 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( 
i = 1 )) 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.695 1+0 records in 00:06:22.695 1+0 records out 00:06:22.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532765 s, 7.7 MB/s 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:22.695 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.954 1+0 records in 00:06:22.954 1+0 records out 00:06:22.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000835684 s, 4.9 MB/s 00:06:22.954 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.213 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:23.213 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.213 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.213 08:06:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:23.213 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:23.213 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:23.213 08:06:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:23.473 1+0 records in 00:06:23.473 1+0 records out 00:06:23.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834817 s, 4.9 MB/s 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:23.473 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.732 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:23.732 { 00:06:23.732 "nbd_device": "/dev/nbd0", 00:06:23.732 "bdev_name": "Nvme0n1" 00:06:23.732 }, 00:06:23.732 { 00:06:23.732 "nbd_device": "/dev/nbd1", 00:06:23.732 "bdev_name": "Nvme1n1" 00:06:23.732 }, 00:06:23.732 { 00:06:23.732 "nbd_device": "/dev/nbd2", 00:06:23.732 "bdev_name": "Nvme2n1" 00:06:23.732 }, 00:06:23.732 { 00:06:23.732 "nbd_device": "/dev/nbd3", 00:06:23.732 "bdev_name": "Nvme2n2" 00:06:23.732 }, 00:06:23.732 { 00:06:23.732 "nbd_device": "/dev/nbd4", 00:06:23.732 "bdev_name": "Nvme2n3" 00:06:23.732 }, 00:06:23.732 { 00:06:23.732 "nbd_device": "/dev/nbd5", 00:06:23.732 "bdev_name": "Nvme3n1" 00:06:23.732 } 00:06:23.732 ]' 00:06:23.732 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:23.732 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:23.732 { 00:06:23.732 "nbd_device": "/dev/nbd0", 00:06:23.732 "bdev_name": "Nvme0n1" 00:06:23.732 }, 00:06:23.732 { 00:06:23.732 "nbd_device": "/dev/nbd1", 00:06:23.732 "bdev_name": "Nvme1n1" 00:06:23.732 }, 00:06:23.732 { 00:06:23.732 
"nbd_device": "/dev/nbd2", 00:06:23.732 "bdev_name": "Nvme2n1" 00:06:23.732 }, 00:06:23.732 { 00:06:23.732 "nbd_device": "/dev/nbd3", 00:06:23.732 "bdev_name": "Nvme2n2" 00:06:23.732 }, 00:06:23.732 { 00:06:23.732 "nbd_device": "/dev/nbd4", 00:06:23.732 "bdev_name": "Nvme2n3" 00:06:23.732 }, 00:06:23.732 { 00:06:23.732 "nbd_device": "/dev/nbd5", 00:06:23.732 "bdev_name": "Nvme3n1" 00:06:23.732 } 00:06:23.732 ]' 00:06:23.732 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:23.732 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:23.732 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.732 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:23.733 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.733 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:23.733 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.733 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.992 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.992 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.992 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.992 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.992 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.992 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.992 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.992 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.992 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.992 08:06:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.254 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.254 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.254 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.254 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.254 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.254 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.254 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.254 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.254 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.254 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:24.514 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:24.514 08:06:29 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:24.514 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:24.514 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.514 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.514 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:24.514 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.514 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.514 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.514 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:24.514 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:24.773 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:24.773 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:24.773 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.774 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:25.033 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:25.033 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:25.033 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:25.033 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.033 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.033 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:25.033 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
00:06:25.033 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.033 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.033 08:06:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.033 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.292 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.292 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.292 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:25.553 /dev/nbd0 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:25.553 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:25.812 1+0 records in 00:06:25.812 1+0 records out 00:06:25.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364818 s, 11.2 MB/s 00:06:25.812 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.812 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:25.812 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.812 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:25.812 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:25.812 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.812 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:25.812 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:26.071 /dev/nbd1 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.071 1+0 records in 00:06:26.071 1+0 records out 
00:06:26.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493702 s, 8.3 MB/s 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.071 08:06:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.072 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.072 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:26.072 08:06:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:26.333 /dev/nbd10 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.333 1+0 records in 00:06:26.333 1+0 records out 00:06:26.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549009 s, 7.5 MB/s 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:26.333 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:26.593 /dev/nbd11 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:26.593 08:06:31 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.593 1+0 records in 00:06:26.593 1+0 records out 00:06:26.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623726 s, 6.6 MB/s 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:26.593 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:26.853 /dev/nbd12 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.853 1+0 records in 00:06:26.853 1+0 records out 00:06:26.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000853524 s, 4.8 MB/s 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:26.853 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:26.853 /dev/nbd13 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:27.111 1+0 records in 00:06:27.111 1+0 records out 00:06:27.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668631 s, 6.1 MB/s 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.111 08:06:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.370 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.370 { 00:06:27.370 "nbd_device": "/dev/nbd0", 00:06:27.370 "bdev_name": "Nvme0n1" 00:06:27.370 }, 00:06:27.370 { 00:06:27.370 "nbd_device": "/dev/nbd1", 00:06:27.370 "bdev_name": "Nvme1n1" 00:06:27.370 }, 00:06:27.370 { 00:06:27.370 "nbd_device": "/dev/nbd10", 00:06:27.370 "bdev_name": "Nvme2n1" 00:06:27.370 }, 00:06:27.370 { 00:06:27.370 "nbd_device": "/dev/nbd11", 00:06:27.370 "bdev_name": "Nvme2n2" 00:06:27.370 }, 
00:06:27.370 { 00:06:27.370 "nbd_device": "/dev/nbd12", 00:06:27.370 "bdev_name": "Nvme2n3" 00:06:27.370 }, 00:06:27.370 { 00:06:27.370 "nbd_device": "/dev/nbd13", 00:06:27.370 "bdev_name": "Nvme3n1" 00:06:27.370 } 00:06:27.370 ]' 00:06:27.370 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.370 { 00:06:27.370 "nbd_device": "/dev/nbd0", 00:06:27.370 "bdev_name": "Nvme0n1" 00:06:27.370 }, 00:06:27.370 { 00:06:27.370 "nbd_device": "/dev/nbd1", 00:06:27.370 "bdev_name": "Nvme1n1" 00:06:27.370 }, 00:06:27.370 { 00:06:27.370 "nbd_device": "/dev/nbd10", 00:06:27.370 "bdev_name": "Nvme2n1" 00:06:27.370 }, 00:06:27.370 { 00:06:27.370 "nbd_device": "/dev/nbd11", 00:06:27.370 "bdev_name": "Nvme2n2" 00:06:27.370 }, 00:06:27.370 { 00:06:27.370 "nbd_device": "/dev/nbd12", 00:06:27.370 "bdev_name": "Nvme2n3" 00:06:27.370 }, 00:06:27.370 { 00:06:27.370 "nbd_device": "/dev/nbd13", 00:06:27.370 "bdev_name": "Nvme3n1" 00:06:27.370 } 00:06:27.370 ]' 00:06:27.370 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.370 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.370 /dev/nbd1 00:06:27.370 /dev/nbd10 00:06:27.370 /dev/nbd11 00:06:27.370 /dev/nbd12 00:06:27.370 /dev/nbd13' 00:06:27.370 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.370 /dev/nbd1 00:06:27.370 /dev/nbd10 00:06:27.370 /dev/nbd11 00:06:27.370 /dev/nbd12 00:06:27.370 /dev/nbd13' 00:06:27.370 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.370 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:27.370 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:27.370 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:27.370 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:27.370 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:27.371 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:27.371 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.371 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.371 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:27.371 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.371 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:27.371 256+0 records in 00:06:27.371 256+0 records out 00:06:27.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106197 s, 98.7 MB/s 00:06:27.371 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.371 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.630 256+0 records in 00:06:27.630 256+0 records out 00:06:27.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166524 s, 6.3 MB/s 00:06:27.630 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.630 08:06:32 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.630 256+0 records in 00:06:27.630 256+0 records out 00:06:27.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174651 s, 6.0 MB/s 00:06:27.630 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.630 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:27.889 256+0 records in 00:06:27.889 256+0 records out 00:06:27.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147657 s, 7.1 MB/s 00:06:27.889 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.889 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:28.148 256+0 records in 00:06:28.148 256+0 records out 00:06:28.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171792 s, 6.1 MB/s 00:06:28.148 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.148 08:06:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:28.148 256+0 records in 00:06:28.148 256+0 records out 00:06:28.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174371 s, 6.0 MB/s 00:06:28.148 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.148 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:28.407 256+0 records in 00:06:28.407 256+0 records out 00:06:28.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170947 s, 6.1 MB/s 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:28.407 08:06:33 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.407 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:28.408 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.408 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.667 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.667 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.667 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.667 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.667 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.667 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.667 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:28.667 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.667 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.667 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.236 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.236 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.236 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.236 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.236 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.236 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.236 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.236 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.236 08:06:33 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.236 08:06:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:29.236 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:29.236 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:29.236 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:29.236 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.236 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.236 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:29.236 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.236 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.236 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.236 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:29.495 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:29.495 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:29.495 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:29.495 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.495 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.495 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:29.495 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:29.495 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.495 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.495 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:30.063 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:30.063 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:30.063 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:30.063 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.063 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.063 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:30.063 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.063 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.063 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.063 08:06:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:30.063 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:30.063 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:30.063 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:30.063 
08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.063 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.063 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:30.063 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.063 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.063 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.063 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.063 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.322 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.322 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.322 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:30.323 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:30.891 malloc_lvol_verify 00:06:30.891 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:30.891 58d2bb47-1cf8-452e-bbaf-a3aa497080a4 00:06:30.891 08:06:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:31.151 eaa226a9-2dc0-49f4-a83a-9b34649d8697 00:06:31.151 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:31.410 /dev/nbd0 00:06:31.410 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:31.410 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:31.410 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:31.410 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:31.410 08:06:36 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:31.410 mke2fs 1.47.0 (5-Feb-2023) 00:06:31.410 Discarding device blocks: 0/4096 done 00:06:31.410 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:31.410 00:06:31.410 Allocating group tables: 0/1 done 00:06:31.410 Writing inode tables: 0/1 done 00:06:31.411 Creating journal (1024 blocks): done 00:06:31.411 Writing superblocks and filesystem accounting information: 0/1 done 00:06:31.411 00:06:31.411 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:31.411 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.411 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:31.411 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.411 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:31.411 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.411 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60787 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60787 ']' 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60787 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.670 08:06:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60787 00:06:31.671 killing process with pid 60787 00:06:31.671 08:06:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.671 08:06:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.671 08:06:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60787' 00:06:31.671 08:06:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60787 00:06:31.671 08:06:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60787 00:06:32.610 08:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:32.610 00:06:32.610 real 0m11.842s 00:06:32.610 user 0m16.904s 00:06:32.610 sys 0m3.689s 00:06:32.610 08:06:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.610 08:06:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
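Before the harness closes the test out below, note that the lvol round-trip just traced (malloc bdev, lvstore, lvol, NBD export, mkfs.ext4, detach) condenses to a short RPC sequence. A minimal sketch, assuming an SPDK application is already serving RPCs on /var/tmp/spdk-nbd.sock; the sizes and names mirror the traced run and are otherwise illustrative:

    #!/usr/bin/env bash
    # Sketch of the lvol-over-NBD check above, not the harness itself.
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # Back an lvolstore with a 16 MiB malloc bdev (512-byte blocks), then
    # carve a 4 MiB logical volume out of it, as in the traced RPCs.
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs

    # Export the lvol as a kernel block device and wait for it to appear;
    # the harness polls /proc/partitions up to 20 times, the sleep is ours.
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done

    # Put a filesystem on it, then detach.
    mkfs.ext4 /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0

The capacity check traced above reads /sys/block/nbd0/size, which reports 512-byte sectors: 8192 sectors at 512 B each is exactly the 4 MiB lvol, confirming the size reached the kernel.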
00:06:32.610 ************************************ 00:06:32.610 END TEST bdev_nbd 00:06:32.610 ************************************ 00:06:32.610 08:06:37 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:32.610 08:06:37 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:32.610 skipping fio tests on NVMe due to multi-ns failures. 00:06:32.610 08:06:37 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:32.610 08:06:37 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:32.610 08:06:37 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:32.610 08:06:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:32.610 08:06:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.610 08:06:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.610 ************************************ 00:06:32.610 START TEST bdev_verify 00:06:32.610 ************************************ 00:06:32.610 08:06:37 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:32.610 [2024-11-17 08:06:37.567393] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:32.610 [2024-11-17 08:06:37.567579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61186 ] 00:06:32.870 [2024-11-17 08:06:37.746271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.870 [2024-11-17 08:06:37.828239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.870 [2024-11-17 08:06:37.828255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.438 Running I/O for 5 seconds... 
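The verification pass that follows is a single bdevperf invocation, launched by run_test at blockdev.sh@776 with the arguments traced above. A minimal sketch of the same command; the repo path is the test VM's and illustrative elsewhere:

    SPDK=/home/vagrant/spdk_repo/spdk
    # --json loads the bdev configuration; -q 128 is the queue depth per job,
    # -o 4096 the I/O size in bytes, -w verify a write-then-read-back check,
    # -t 5 the run time in seconds, -m 0x3 a core mask putting reactors on
    # cores 0 and 1; -C is passed as in the traced run.
    "$SPDK/build/examples/bdevperf" \
        --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

With the 0x3 mask each namespace is exercised from both cores, which is why every device appears twice (Core Mask 0x1 and 0x2) in the results below.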
00:06:35.748 20224.00 IOPS, 79.00 MiB/s
[2024-11-17T08:06:41.698Z] 19456.00 IOPS, 76.00 MiB/s
[2024-11-17T08:06:43.076Z] 18944.00 IOPS, 74.00 MiB/s
[2024-11-17T08:06:43.645Z] 18656.00 IOPS, 72.88 MiB/s
[2024-11-17T08:06:43.645Z] 18278.40 IOPS, 71.40 MiB/s
00:06:38.633 Latency(us)
00:06:38.633 [2024-11-17T08:06:43.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:38.633 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.633 Verification LBA range: start 0x0 length 0xbd0bd
00:06:38.633 Nvme0n1 : 5.04 1522.39 5.95 0.00 0.00 83756.65 16562.73 74353.57
00:06:38.633 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.633 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:38.633 Nvme0n1 : 5.04 1472.02 5.75 0.00 0.00 86606.77 17754.30 74830.20
00:06:38.633 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.633 Verification LBA range: start 0x0 length 0xa0000
00:06:38.633 Nvme1n1 : 5.05 1521.84 5.94 0.00 0.00 83680.78 20137.43 69587.32
00:06:38.633 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.633 Verification LBA range: start 0xa0000 length 0xa0000
00:06:38.633 Nvme1n1 : 5.07 1476.62 5.77 0.00 0.00 86041.86 9294.20 71493.82
00:06:38.633 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.633 Verification LBA range: start 0x0 length 0x80000
00:06:38.633 Nvme2n1 : 5.05 1521.28 5.94 0.00 0.00 83594.02 19899.11 66250.94
00:06:38.633 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.633 Verification LBA range: start 0x80000 length 0x80000
00:06:38.633 Nvme2n1 : 5.09 1483.62 5.80 0.00 0.00 85654.19 16086.11 70063.94
00:06:38.633 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.633 Verification LBA range: start 0x0 length 0x80000
00:06:38.633 Nvme2n2 : 5.06 1529.69 5.98 0.00 0.00 83094.41 7983.48 67204.19
00:06:38.633 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.633 Verification LBA range: start 0x80000 length 0x80000
00:06:38.633 Nvme2n2 : 5.09 1483.11 5.79 0.00 0.00 85521.06 16324.42 68634.07
00:06:38.633 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.633 Verification LBA range: start 0x0 length 0x80000
00:06:38.633 Nvme2n3 : 5.06 1529.09 5.97 0.00 0.00 82958.49 8281.37 71493.82
00:06:38.633 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.633 Verification LBA range: start 0x80000 length 0x80000
00:06:38.633 Nvme2n3 : 5.09 1482.60 5.79 0.00 0.00 85394.74 13762.56 73876.95
00:06:38.633 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:38.633 Verification LBA range: start 0x0 length 0x20000
00:06:38.633 Nvme3n1 : 5.07 1528.49 5.97 0.00 0.00 82824.32 8638.84 73876.95
00:06:38.633 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:38.633 Verification LBA range: start 0x20000 length 0x20000
00:06:38.633 Nvme3n1 : 5.10 1482.10 5.79 0.00 0.00 85314.45 10545.34 74830.20
00:06:38.633 [2024-11-17T08:06:43.645Z] ===================================================================================================================
00:06:38.633 [2024-11-17T08:06:43.645Z] Total : 18032.86 70.44 0.00 0.00 84519.30 7983.48 74830.20
00:06:40.011
00:06:40.011 real 0m7.142s
00:06:40.011 user 0m13.233s
00:06:40.011 sys 0m0.250s
00:06:40.011 08:06:44 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.011 08:06:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:40.011 ************************************ 00:06:40.011 END TEST bdev_verify 00:06:40.011 ************************************ 00:06:40.011 08:06:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:40.011 08:06:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:40.011 08:06:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.011 08:06:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.011 ************************************ 00:06:40.011 START TEST bdev_verify_big_io 00:06:40.011 ************************************ 00:06:40.011 08:06:44 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:40.011 [2024-11-17 08:06:44.761640] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:40.011 [2024-11-17 08:06:44.761811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61284 ] 00:06:40.011 [2024-11-17 08:06:44.939711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.270 [2024-11-17 08:06:45.024317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.270 [2024-11-17 08:06:45.024329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.837 Running I/O for 5 seconds... 
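The big-I/O variant just launched is the same harness with only the block size changed: -o 65536 (64 KiB) in place of 4096, again for 5 seconds on both cores. A sketch under the same assumptions as the 4 KiB run above:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Identical to the 4 KiB verify invocation apart from the 64 KiB I/O size.
    "$SPDK/build/examples/bdevperf" \
        --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3

In the results below, total IOPS drop roughly tenfold against the 4 KiB run (18032.86 to 1737.83) while aggregate MiB/s improves (70.44 to 108.61), the expected trade for 16x larger I/Os.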
00:06:46.027 1730.00 IOPS, 108.12 MiB/s
[2024-11-17T08:06:51.609Z] 2580.00 IOPS, 161.25 MiB/s
[2024-11-17T08:06:51.609Z] 3284.00 IOPS, 205.25 MiB/s
00:06:46.597 Latency(us)
00:06:46.597 [2024-11-17T08:06:51.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:46.597 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:46.597 Verification LBA range: start 0x0 length 0xbd0b
00:06:46.597 Nvme0n1 : 5.51 139.40 8.71 0.00 0.00 894367.50 24784.52 880803.84
00:06:46.597 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:46.597 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:46.597 Nvme0n1 : 5.65 132.77 8.30 0.00 0.00 932567.59 31457.28 873177.83
00:06:46.597 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:46.597 Verification LBA range: start 0x0 length 0xa000
00:06:46.597 Nvme1n1 : 5.51 139.31 8.71 0.00 0.00 871342.70 66727.56 754974.72
00:06:46.597 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:46.597 Verification LBA range: start 0xa000 length 0xa000
00:06:46.597 Nvme1n1 : 5.65 131.67 8.23 0.00 0.00 913123.08 31457.28 815982.78
00:06:46.597 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:46.597 Verification LBA range: start 0x0 length 0x8000
00:06:46.597 Nvme2n1 : 5.62 140.46 8.78 0.00 0.00 834337.68 107240.73 758787.72
00:06:46.598 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:46.598 Verification LBA range: start 0x8000 length 0x8000
00:06:46.598 Nvme2n1 : 5.65 135.85 8.49 0.00 0.00 870696.96 74830.20 823608.79
00:06:46.598 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:46.598 Verification LBA range: start 0x0 length 0x8000
00:06:46.598 Nvme2n2 : 5.75 151.71 9.48 0.00 0.00 757130.32 41943.04 777852.74
00:06:46.598 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:46.598 Verification LBA range: start 0x8000 length 0x8000
00:06:46.598 Nvme2n2 : 5.66 135.78 8.49 0.00 0.00 846790.28 65774.31 846486.81
00:06:46.598 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:46.598 Verification LBA range: start 0x0 length 0x8000
00:06:46.598 Nvme2n3 : 5.76 155.65 9.73 0.00 0.00 720661.14 40513.16 796917.76
00:06:46.598 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:46.598 Verification LBA range: start 0x8000 length 0x8000
00:06:46.598 Nvme2n3 : 5.79 150.83 9.43 0.00 0.00 750443.37 10604.92 873177.83
00:06:46.598 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:46.598 Verification LBA range: start 0x0 length 0x2000
00:06:46.598 Nvme3n1 : 5.80 169.86 10.62 0.00 0.00 644031.66 5987.61 999006.95
00:06:46.598 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:46.598 Verification LBA range: start 0x2000 length 0x2000
00:06:46.598 Nvme3n1 : 5.80 154.55 9.66 0.00 0.00 712198.08 5302.46 896055.85
00:06:46.598 [2024-11-17T08:06:51.610Z] ===================================================================================================================
00:06:46.598 [2024-11-17T08:06:51.610Z] Total : 1737.83 108.61 0.00 0.00 804462.25 5302.46 999006.95
00:06:48.016
00:06:48.016 real 0m8.265s
00:06:48.016 user 0m15.467s
00:06:48.016 sys 0m0.286s
00:06:48.016 08:06:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:48.016 ************************************
00:06:48.016 END TEST bdev_verify_big_io ************************************
00:06:48.016 08:06:52 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:48.016 08:06:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:48.016 08:06:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:48.016 08:06:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:48.016 ************************************
00:06:48.016 START TEST bdev_write_zeroes
00:06:48.016 ************************************
00:06:48.016 08:06:52 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:48.304 [2024-11-17 08:06:53.056505] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:48.304 [2024-11-17 08:06:53.056645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61388 ]
00:06:48.304 [2024-11-17 08:06:53.220496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.304 [2024-11-17 08:06:53.301664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.872 Running I/O for 1 seconds...
00:06:50.247 54528.00 IOPS, 213.00 MiB/s
00:06:50.247 Latency(us)
00:06:50.247 [2024-11-17T08:06:55.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:50.247 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.247 Nvme0n1 : 1.03 9017.64 35.23 0.00 0.00 14161.04 11498.59 28597.53
00:06:50.247 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.247 Nvme1n1 : 1.03 9003.95 35.17 0.00 0.00 14161.13 12034.79 27882.59
00:06:50.247 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.247 Nvme2n1 : 1.03 8990.54 35.12 0.00 0.00 14125.53 11617.75 27048.49
00:06:50.247 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.247 Nvme2n2 : 1.03 8977.31 35.07 0.00 0.00 14072.58 9651.67 26214.40
00:06:50.247 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.247 Nvme2n3 : 1.04 8963.95 35.02 0.00 0.00 14049.92 8221.79 26810.18
00:06:50.247 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:50.247 Nvme3n1 : 1.04 8950.81 34.96 0.00 0.00 14035.38 7923.90 28955.00
00:06:50.247 [2024-11-17T08:06:55.259Z] ===================================================================================================================
00:06:50.247 [2024-11-17T08:06:55.259Z] Total : 53904.20 210.56 0.00 0.00 14100.93 7923.90 28955.00
00:06:51.184
00:06:51.184 real 0m2.910s
00:06:51.184 user 0m2.602s
00:06:51.184 sys 0m0.187s
00:06:51.184 08:06:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
************************************
END TEST bdev_write_zeroes
************************************ 00:06:51.184 08:06:55 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:51.184 08:06:55 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.184 08:06:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:51.184 08:06:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.184 08:06:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.184 ************************************ 00:06:51.184 START TEST bdev_json_nonenclosed 00:06:51.184 ************************************ 00:06:51.184 08:06:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.184 [2024-11-17 08:06:56.045590] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:51.184 [2024-11-17 08:06:56.045997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61442 ] 00:06:51.443 [2024-11-17 08:06:56.225488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.443 [2024-11-17 08:06:56.307583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.443 [2024-11-17 08:06:56.307718] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:51.443 [2024-11-17 08:06:56.307746] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:51.443 [2024-11-17 08:06:56.307758] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.703 ************************************ 00:06:51.703 END TEST bdev_json_nonenclosed 00:06:51.703 ************************************ 00:06:51.703 00:06:51.703 real 0m0.576s 00:06:51.703 user 0m0.363s 00:06:51.703 sys 0m0.107s 00:06:51.703 08:06:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.703 08:06:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:51.703 08:06:56 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.703 08:06:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:51.703 08:06:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.703 08:06:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.703 ************************************ 00:06:51.703 START TEST bdev_json_nonarray 00:06:51.703 ************************************ 00:06:51.703 08:06:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:51.703 [2024-11-17 08:06:56.649055] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:51.703 [2024-11-17 08:06:56.649203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61467 ] 00:06:51.963 [2024-11-17 08:06:56.812729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.963 [2024-11-17 08:06:56.893429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.963 [2024-11-17 08:06:56.893551] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:51.963 [2024-11-17 08:06:56.893577] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:51.963 [2024-11-17 08:06:56.893590] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.222 00:06:52.222 real 0m0.516s 00:06:52.222 user 0m0.301s 00:06:52.223 sys 0m0.112s 00:06:52.223 ************************************ 00:06:52.223 END TEST bdev_json_nonarray 00:06:52.223 ************************************ 00:06:52.223 08:06:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.223 08:06:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:52.223 08:06:57 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:52.223 08:06:57 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:52.223 08:06:57 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:52.223 08:06:57 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:52.223 08:06:57 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:52.223 08:06:57 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:52.223 08:06:57 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:52.223 08:06:57 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:52.223 08:06:57 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:52.223 08:06:57 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:52.223 08:06:57 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:52.223 00:06:52.223 real 0m39.221s 00:06:52.223 user 1m0.218s 00:06:52.223 sys 0m5.942s 00:06:52.223 08:06:57 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.223 08:06:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:52.223 ************************************ 00:06:52.223 END TEST blockdev_nvme 00:06:52.223 ************************************ 00:06:52.223 08:06:57 -- spdk/autotest.sh@209 -- # uname -s 00:06:52.223 08:06:57 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:52.223 08:06:57 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:52.223 08:06:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.223 08:06:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.223 08:06:57 -- common/autotest_common.sh@10 -- # set +x 00:06:52.223 ************************************ 00:06:52.223 START TEST blockdev_nvme_gpt 00:06:52.223 ************************************ 00:06:52.223 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:52.483 * Looking for test storage... 
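Note on the two JSON checks just completed: bdev_json_nonenclosed and bdev_json_nonarray feed bdevperf configs that are malformed on purpose and assert on the json_config.c errors visible in the trace ("not enclosed in {}" and "'subsystems' should be an array"). A minimal sketch of the shapes involved, written as bash heredocs; the /tmp file names are illustrative, not the repo's actual fixture paths:

# not enclosed in {} -> json_config_prepare_ctx rejects it
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF

# enclosed, but "subsystems" is not an array -> also rejected
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": "bdev" }
EOF

# the accepted shape: an object whose "subsystems" value is an array
cat > /tmp/valid.json <<'EOF'
{ "subsystems": [] }
EOF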
00:06:52.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:52.483 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.483 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.483 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.483 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.483 08:06:57 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:52.483 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.483 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:52.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.483 --rc genhtml_branch_coverage=1 00:06:52.483 --rc genhtml_function_coverage=1 00:06:52.483 --rc genhtml_legend=1 00:06:52.483 --rc geninfo_all_blocks=1 00:06:52.483 --rc geninfo_unexecuted_blocks=1 00:06:52.483 00:06:52.483 ' 00:06:52.483 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:52.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.483 --rc 
genhtml_branch_coverage=1 00:06:52.483 --rc genhtml_function_coverage=1 00:06:52.483 --rc genhtml_legend=1 00:06:52.483 --rc geninfo_all_blocks=1 00:06:52.483 --rc geninfo_unexecuted_blocks=1 00:06:52.483 00:06:52.483 ' 00:06:52.483 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:52.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.483 --rc genhtml_branch_coverage=1 00:06:52.483 --rc genhtml_function_coverage=1 00:06:52.484 --rc genhtml_legend=1 00:06:52.484 --rc geninfo_all_blocks=1 00:06:52.484 --rc geninfo_unexecuted_blocks=1 00:06:52.484 00:06:52.484 ' 00:06:52.484 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:52.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.484 --rc genhtml_branch_coverage=1 00:06:52.484 --rc genhtml_function_coverage=1 00:06:52.484 --rc genhtml_legend=1 00:06:52.484 --rc geninfo_all_blocks=1 00:06:52.484 --rc geninfo_unexecuted_blocks=1 00:06:52.484 00:06:52.484 ' 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:52.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
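The "Waiting for process..." line above is waitforlisten's own echo: blockdev.sh launches spdk_tgt in the background and blocks until the RPC socket at /var/tmp/spdk.sock answers, and the xtrace for that launch follows below. A simplified sketch of the wait loop, assuming a plain socket-existence probe rather than the full RPC handshake autotest_common.sh actually performs:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i > 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1   # target exited before listening
        [[ -S $rpc_addr ]] && break               # socket is up; good enough here
        sleep 0.5
    done
    (( i > 0 ))   # success iff we broke out before exhausting the retries
}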
00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61546 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61546 00:06:52.484 08:06:57 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:52.484 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61546 ']' 00:06:52.484 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.484 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.484 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.484 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.484 08:06:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.743 [2024-11-17 08:06:57.513816] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:52.743 [2024-11-17 08:06:57.514226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61546 ] 00:06:52.743 [2024-11-17 08:06:57.694342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.002 [2024-11-17 08:06:57.783399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.570 08:06:58 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.570 08:06:58 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:53.571 08:06:58 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:53.571 08:06:58 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:53.571 08:06:58 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:53.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:54.088 Waiting for block devices as requested 00:06:54.088 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:54.347 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:54.347 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:54.347 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:59.619 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:59.619 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- 
# for nvme in /sys/block/nvme* 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:59.619 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:59.620 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:59.620 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:59.620 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:59.620 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:59.620 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:59.620 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:59.620 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:59.620 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:59.620 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:59.620 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:59.620 08:07:04 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:59.620 BYT; 00:06:59.620 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:59.620 BYT; 00:06:59.620 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 
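The scan above has two stages: get_zoned_devs keeps any /sys/block/nvme* entry whose queue/zoned attribute is not "none", and setup_gpt_conf then probes each remaining namespace with parted until one reports no disk label, making it safe to repartition. A condensed sketch of both loops; it mirrors the trace rather than quoting the scripts, with names shortened and error handling dropped:

declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    # conventional (non-zoned) namespaces report "none" here
    [[ $(<"$nvme/queue/zoned") != none ]] && zoned_devs[${nvme##*/}]=1
done

gpt_nvme=
for dev in /dev/nvme?n?; do
    [[ -z ${zoned_devs[${dev##*/}]:-} ]] || continue
    # a blank namespace makes parted print "unrecognised disk label"
    if parted "$dev" -ms print 2>&1 | grep -q 'unrecognised disk label'; then
        gpt_nvme=$dev
        break
    fi
done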
00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:59.620 08:07:04 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:59.620 08:07:04 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:00.555 The operation has completed successfully. 00:07:00.555 08:07:05 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:01.932 The operation has completed successfully. 
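The partition-type GUIDs that sgdisk stamps above are not hard-coded in the test: get_spdk_gpt (and its _old twin, which greps SPDK_GPT_PART_TYPE_GUID_OLD instead) parses them out of module/bdev/gpt/gpt.h by splitting the #define line at the parentheses with IFS='()' and then normalizing the macro arguments. A condensed sketch; the exact macro layout in gpt.h is assumed from the trace:

get_spdk_gpt() {
    local gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h spdk_guid
    # header line: ... SPDK_GPT_PART_TYPE_GUID SPDK_GPT_GUID(0x6527994e, 0x2c5a, 0x4eec, 0x9613, 0x8f5944074e8b)
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
    spdk_guid=${spdk_guid//, /-}   # "0x6527994e, 0x2c5a, ..." -> "0x6527994e-0x2c5a-..."
    spdk_guid=${spdk_guid//0x/}    # drop the 0x prefixes
    echo "$spdk_guid"              # 6527994e-2c5a-4eec-9613-8f5944074e8b
}

# usage, matching the sgdisk call in the trace above:
sgdisk -t 1:"$(get_spdk_gpt)" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1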
00:07:01.932 08:07:06 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:02.191 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:02.758 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.758 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.758 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.758 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:02.758 08:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:02.758 08:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.758 08:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.758 [] 00:07:02.758 08:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.758 08:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:02.758 08:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:02.758 08:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:02.758 08:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:03.017 08:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:03.017 08:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.017 08:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.277 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.277 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:03.277 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.277 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.277 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.277 08:07:08 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.277 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:03.277 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.277 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:03.277 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.277 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:03.277 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:03.278 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8b07d3b7-f807-4f83-b48e-5d1f1bc23580"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8b07d3b7-f807-4f83-b48e-5d1f1bc23580",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": 
"6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b1f13dc9-3511-4ac5-b3f8-cb7587525d73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b1f13dc9-3511-4ac5-b3f8-cb7587525d73",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "3dd1d85a-0c7b-41f6-8610-f2c404d8ff6a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3dd1d85a-0c7b-41f6-8610-f2c404d8ff6a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6636986a-9fc7-4aba-afd4-5bf4dbf437f0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6636986a-9fc7-4aba-afd4-5bf4dbf437f0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "6c45df3b-c8c8-4c90-80d7-362878970329"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6c45df3b-c8c8-4c90-80d7-362878970329",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:03.278 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:03.278 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:03.278 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:03.278 08:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61546 00:07:03.278 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61546 ']' 00:07:03.278 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61546 00:07:03.278 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:03.278 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.278 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61546 00:07:03.538 killing process with pid 61546 00:07:03.538 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.538 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.538 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61546' 00:07:03.538 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61546 00:07:03.538 08:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61546 00:07:04.916 08:07:09 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:04.916 08:07:09 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:04.916 08:07:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:04.916 08:07:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.916 08:07:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:04.916 ************************************ 00:07:04.916 START TEST bdev_hello_world 00:07:04.916 ************************************ 00:07:04.916 08:07:09 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:05.175 [2024-11-17 08:07:10.000391] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:05.175 [2024-11-17 08:07:10.000532] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62172 ] 00:07:05.175 [2024-11-17 08:07:10.167537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.434 [2024-11-17 08:07:10.248771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.003 [2024-11-17 08:07:10.791856] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:06.003 [2024-11-17 08:07:10.791908] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:06.003 [2024-11-17 08:07:10.791947] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:06.003 [2024-11-17 08:07:10.794625] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:06.003 [2024-11-17 08:07:10.795259] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:06.003 [2024-11-17 08:07:10.795306] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:06.003 [2024-11-17 08:07:10.795618] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:06.003 00:07:06.003 [2024-11-17 08:07:10.795699] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:06.571 00:07:06.571 real 0m1.638s 00:07:06.571 user 0m1.344s 00:07:06.571 sys 0m0.187s 00:07:06.571 08:07:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.571 ************************************ 00:07:06.571 END TEST bdev_hello_world 00:07:06.571 ************************************ 00:07:06.571 08:07:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:06.830 08:07:11 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:06.830 08:07:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.830 08:07:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.830 08:07:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.830 ************************************ 00:07:06.830 START TEST bdev_bounds 00:07:06.830 ************************************ 00:07:06.830 08:07:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:06.830 Process bdevio pid: 62214 00:07:06.830 08:07:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62214 00:07:06.830 08:07:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.830 08:07:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:06.830 08:07:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62214' 00:07:06.830 08:07:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62214 00:07:06.830 08:07:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62214 ']' 00:07:06.830 08:07:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.830 08:07:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.830 08:07:11 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.830 08:07:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.830 08:07:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:06.830 [2024-11-17 08:07:11.716147] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:06.830 [2024-11-17 08:07:11.716324] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62214 ] 00:07:07.089 [2024-11-17 08:07:11.890083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.089 [2024-11-17 08:07:11.972945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.089 [2024-11-17 08:07:11.973053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.089 [2024-11-17 08:07:11.973068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.028 08:07:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.028 08:07:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:08.028 08:07:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:08.028 I/O targets: 00:07:08.028 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:08.028 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:08.028 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:08.028 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:08.028 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:08.028 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:08.028 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:08.028 00:07:08.028 00:07:08.028 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.028 http://cunit.sourceforge.net/ 00:07:08.028 00:07:08.028 00:07:08.028 Suite: bdevio tests on: Nvme3n1 00:07:08.028 Test: blockdev write read block ...passed 00:07:08.028 Test: blockdev write zeroes read block ...passed 00:07:08.028 Test: blockdev write zeroes read no split ...passed 00:07:08.028 Test: blockdev write zeroes read split ...passed 00:07:08.028 Test: blockdev write zeroes read split partial ...passed 00:07:08.028 Test: blockdev reset ...[2024-11-17 08:07:12.871888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:08.028 passed 00:07:08.028 Test: blockdev write read 8 blocks ...[2024-11-17 08:07:12.875428] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:08.028 passed 00:07:08.028 Test: blockdev write read size > 128k ...passed 00:07:08.028 Test: blockdev write read invalid size ...passed 00:07:08.028 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.028 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.028 Test: blockdev write read max offset ...passed 00:07:08.028 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.028 Test: blockdev writev readv 8 blocks ...passed 00:07:08.028 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.028 Test: blockdev writev readv block ...passed 00:07:08.028 Test: blockdev writev readv size > 128k ...passed 00:07:08.028 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.028 Test: blockdev comparev and writev ...[2024-11-17 08:07:12.884014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf004000 len:0x1000 00:07:08.028 [2024-11-17 08:07:12.884072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.028 passed 00:07:08.028 Test: blockdev nvme passthru rw ...passed 00:07:08.028 Test: blockdev nvme passthru vendor specific ...passed 00:07:08.028 Test: blockdev nvme admin passthru ...[2024-11-17 08:07:12.885242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.028 [2024-11-17 08:07:12.885305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.028 passed 00:07:08.028 Test: blockdev copy ...passed 00:07:08.028 Suite: bdevio tests on: Nvme2n3 00:07:08.028 Test: blockdev write read block ...passed 00:07:08.028 Test: blockdev write zeroes read block ...passed 00:07:08.028 Test: blockdev write zeroes read no split ...passed 00:07:08.028 Test: blockdev write zeroes read split ...passed 00:07:08.028 Test: blockdev write zeroes read split partial ...passed 00:07:08.028 Test: blockdev reset ...[2024-11-17 08:07:12.942483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:08.028 [2024-11-17 08:07:12.946409] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:08.028 passed 00:07:08.028 Test: blockdev write read 8 blocks ...passed 00:07:08.028 Test: blockdev write read size > 128k ...passed 00:07:08.028 Test: blockdev write read invalid size ...passed 00:07:08.028 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.028 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.028 Test: blockdev write read max offset ...passed 00:07:08.028 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.028 Test: blockdev writev readv 8 blocks ...passed 00:07:08.028 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.028 Test: blockdev writev readv block ...passed 00:07:08.028 Test: blockdev writev readv size > 128k ...passed 00:07:08.028 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.028 Test: blockdev comparev and writev ...[2024-11-17 08:07:12.955286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf002000 len:0x1000 00:07:08.028 [2024-11-17 08:07:12.955339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.028 passed 00:07:08.028 Test: blockdev nvme passthru rw ...passed 00:07:08.028 Test: blockdev nvme passthru vendor specific ...passed 00:07:08.028 Test: blockdev nvme admin passthru ...[2024-11-17 08:07:12.956244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.028 [2024-11-17 08:07:12.956289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.028 passed 00:07:08.028 Test: blockdev copy ...passed 00:07:08.028 Suite: bdevio tests on: Nvme2n2 00:07:08.028 Test: blockdev write read block ...passed 00:07:08.028 Test: blockdev write zeroes read block ...passed 00:07:08.028 Test: blockdev write zeroes read no split ...passed 00:07:08.028 Test: blockdev write zeroes read split ...passed 00:07:08.028 Test: blockdev write zeroes read split partial ...passed 00:07:08.028 Test: blockdev reset ...[2024-11-17 08:07:13.013195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:08.028 [2024-11-17 08:07:13.017329] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:08.028 passed 00:07:08.028 Test: blockdev write read 8 blocks ...passed 00:07:08.028 Test: blockdev write read size > 128k ...passed 00:07:08.028 Test: blockdev write read invalid size ...passed 00:07:08.028 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.028 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.028 Test: blockdev write read max offset ...passed 00:07:08.028 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.028 Test: blockdev writev readv 8 blocks ...passed 00:07:08.028 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.028 Test: blockdev writev readv block ...passed 00:07:08.028 Test: blockdev writev readv size > 128k ...passed 00:07:08.028 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.028 Test: blockdev comparev and writev ...[2024-11-17 08:07:13.026392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e7038000 len:0x1000 00:07:08.028 [2024-11-17 08:07:13.026459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.028 passed 00:07:08.028 Test: blockdev nvme passthru rw ...passed 00:07:08.028 Test: blockdev nvme passthru vendor specific ...passed 00:07:08.028 Test: blockdev nvme admin passthru ...[2024-11-17 08:07:13.027286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.028 [2024-11-17 08:07:13.027334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.028 passed 00:07:08.028 Test: blockdev copy ...passed 00:07:08.028 Suite: bdevio tests on: Nvme2n1 00:07:08.028 Test: blockdev write read block ...passed 00:07:08.028 Test: blockdev write zeroes read block ...passed 00:07:08.288 Test: blockdev write zeroes read no split ...passed 00:07:08.288 Test: blockdev write zeroes read split ...passed 00:07:08.288 Test: blockdev write zeroes read split partial ...passed 00:07:08.288 Test: blockdev reset ...[2024-11-17 08:07:13.095291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:08.288 [2024-11-17 08:07:13.099131] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:08.288 passed 00:07:08.288 Test: blockdev write read 8 blocks ...passed 00:07:08.288 Test: blockdev write read size > 128k ...passed 00:07:08.288 Test: blockdev write read invalid size ...passed 00:07:08.288 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.288 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.288 Test: blockdev write read max offset ...passed 00:07:08.288 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.288 Test: blockdev writev readv 8 blocks ...passed 00:07:08.288 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.288 Test: blockdev writev readv block ...passed 00:07:08.288 Test: blockdev writev readv size > 128k ...passed 00:07:08.288 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.288 Test: blockdev comparev and writev ...[2024-11-17 08:07:13.110293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e7034000 len:0x1000 00:07:08.288 [2024-11-17 08:07:13.110346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.288 passed 00:07:08.288 Test: blockdev nvme passthru rw ...passed 00:07:08.288 Test: blockdev nvme passthru vendor specific ...[2024-11-17 08:07:13.111323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.288 [2024-11-17 08:07:13.111361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.288 passed 00:07:08.288 Test: blockdev nvme admin passthru ...passed 00:07:08.288 Test: blockdev copy ...passed 00:07:08.288 Suite: bdevio tests on: Nvme1n1p2 00:07:08.288 Test: blockdev write read block ...passed 00:07:08.288 Test: blockdev write zeroes read block ...passed 00:07:08.288 Test: blockdev write zeroes read no split ...passed 00:07:08.288 Test: blockdev write zeroes read split ...passed 00:07:08.288 Test: blockdev write zeroes read split partial ...passed 00:07:08.288 Test: blockdev reset ...[2024-11-17 08:07:13.183178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:08.288 [2024-11-17 08:07:13.186611] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:08.288 passed 00:07:08.288 Test: blockdev write read 8 blocks ...passed 00:07:08.288 Test: blockdev write read size > 128k ...passed 00:07:08.288 Test: blockdev write read invalid size ...passed 00:07:08.288 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.288 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.288 Test: blockdev write read max offset ...passed 00:07:08.288 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.288 Test: blockdev writev readv 8 blocks ...passed 00:07:08.288 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.288 Test: blockdev writev readv block ...passed 00:07:08.288 Test: blockdev writev readv size > 128k ...passed 00:07:08.288 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.289 Test: blockdev comparev and writev ...[2024-11-17 08:07:13.196167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e7030000 len:0x1000 00:07:08.289 [2024-11-17 08:07:13.196364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.289 passed 00:07:08.289 Test: blockdev nvme passthru rw ...passed 00:07:08.289 Test: blockdev nvme passthru vendor specific ...passed 00:07:08.289 Test: blockdev nvme admin passthru ...passed 00:07:08.289 Test: blockdev copy ...passed 00:07:08.289 Suite: bdevio tests on: Nvme1n1p1 00:07:08.289 Test: blockdev write read block ...passed 00:07:08.289 Test: blockdev write zeroes read block ...passed 00:07:08.289 Test: blockdev write zeroes read no split ...passed 00:07:08.289 Test: blockdev write zeroes read split ...passed 00:07:08.289 Test: blockdev write zeroes read split partial ...passed 00:07:08.289 Test: blockdev reset ...[2024-11-17 08:07:13.261285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:08.289 [2024-11-17 08:07:13.265082] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:07:08.289 Test: blockdev write read 8 blocks ...passed 00:07:08.289 Test: blockdev write read size > 128k ...passed 00:07:08.289 Test: blockdev write read invalid size ...passed 00:07:08.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.289 Test: blockdev write read max offset ...passed 00:07:08.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.289 Test: blockdev writev readv 8 blocks ...passed 00:07:08.289 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.289 Test: blockdev writev readv block ...passed 00:07:08.289 Test: blockdev writev readv size > 128k ...passed 00:07:08.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.289 Test: blockdev comparev and writev ...[2024-11-17 08:07:13.274591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2cfa0e000 len:0x1000 00:07:08.289 [2024-11-17 08:07:13.274658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.289 passed 00:07:08.289 Test: blockdev nvme passthru rw ...passed 00:07:08.289 Test: blockdev nvme passthru vendor specific ...passed 00:07:08.289 Test: blockdev nvme admin passthru ...passed 00:07:08.289 Test: blockdev copy ...passed 00:07:08.289 Suite: bdevio tests on: Nvme0n1 00:07:08.289 Test: blockdev write read block ...passed 00:07:08.289 Test: blockdev write zeroes read block ...passed 00:07:08.289 Test: blockdev write zeroes read no split ...passed 00:07:08.548 Test: blockdev write zeroes read split ...passed 00:07:08.548 Test: blockdev write zeroes read split partial ...passed 00:07:08.548 Test: blockdev reset ...[2024-11-17 08:07:13.324676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:08.548 [2024-11-17 08:07:13.328205] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:08.548 passed 00:07:08.548 Test: blockdev write read 8 blocks ...passed 00:07:08.548 Test: blockdev write read size > 128k ...passed 00:07:08.548 Test: blockdev write read invalid size ...passed 00:07:08.548 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.548 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.548 Test: blockdev write read max offset ...passed 00:07:08.548 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.548 Test: blockdev writev readv 8 blocks ...passed 00:07:08.548 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.548 Test: blockdev writev readv block ...passed 00:07:08.548 Test: blockdev writev readv size > 128k ...passed 00:07:08.548 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.548 Test: blockdev comparev and writev ...[2024-11-17 08:07:13.335609] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:08.548 separate metadata which is not supported yet. 00:07:08.548 passed
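(Editorial sketch, bash.) bdevio skips comparev_and_writev on Nvme0n1 above because that namespace carries its metadata in a separate buffer rather than interleaved with the data, which the case does not support yet. One hedged way to spot such bdevs before a run, assuming bdev_get_bdevs reports md_size and md_interleave fields as in current SPDK:

# List bdevs whose metadata is separate (non-zero md_size, not interleaved).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'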
00:07:08.548 Test: blockdev nvme passthru rw ...passed 00:07:08.548 Test: blockdev nvme passthru vendor specific ...[2024-11-17 08:07:13.336257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:08.549 [2024-11-17 08:07:13.336425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:08.549 passed 00:07:08.548 Test: blockdev nvme admin passthru ...passed 00:07:08.549 Test: blockdev copy ...passed 00:07:08.549 00:07:08.549 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.549 suites 7 7 n/a 0 0 00:07:08.549 tests 161 161 161 0 0 00:07:08.549 asserts 1025 1025 1025 0 n/a 00:07:08.549 00:07:08.549 Elapsed time = 1.400 seconds 00:07:08.549 0 00:07:08.549 08:07:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62214 00:07:08.549 08:07:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62214 ']' 00:07:08.549 08:07:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62214 00:07:08.549 08:07:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:08.549 08:07:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.549 08:07:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62214 00:07:08.549 08:07:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.549 08:07:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.549 08:07:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62214' 00:07:08.549 killing process with pid 62214 00:07:08.549 08:07:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62214 00:07:08.549 08:07:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62214
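(Editorial sketch, bash.) The killprocess helper traced above follows a simple contract: refuse an empty pid, confirm the process is alive, resolve its command name (reactor_0 here, i.e. the SPDK app itself rather than a sudo wrapper), then signal and reap it. A simplified sketch of that flow; the real helper in common/autotest_common.sh also covers the sudo branch, which this run did not take:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # no pid supplied
    kill -0 "$pid" 2>/dev/null || return 0    # nothing left to kill
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                   # reap it when it is our child
    return 0
}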
00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:09.487 00:07:09.487 real 0m2.539s 00:07:09.487 user 0m6.677s 00:07:09.487 sys 0m0.333s 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:09.487 ************************************ 00:07:09.487 END TEST bdev_bounds 00:07:09.487 ************************************ 00:07:09.487 08:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:09.487 08:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:09.487 08:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.487 08:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.487 ************************************ 00:07:09.487 START TEST bdev_nbd 00:07:09.487 ************************************ 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62274 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62274 /var/tmp/spdk-nbd.sock 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62274 ']' 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.487 08:07:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:09.488 08:07:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.488 08:07:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:09.488 [2024-11-17 08:07:14.300661] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
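(Editorial sketch, bash.) Each nbd_start_disk call in the cycles below is paired with a waitfornbd probe before the export is trusted: poll /proc/partitions until the nbd node registers, then prove it is readable with a single O_DIRECT 4 KiB read, exactly as the dd lines that follow show. A condensed sketch of that loop (the temp-file path is illustrative; the harness writes under test/bdev/):

waitfornbd() {
    local nbd_name=$1 i                 # e.g. nbd0
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                       # the traced runs succeed on the first probe
    done
    # One direct-I/O read; a zero-byte result means the export is not usable.
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ] || return 1
    rm -f /tmp/nbdtest
}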
00:07:09.488 [2024-11-17 08:07:14.301019] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.488 [2024-11-17 08:07:14.461021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.747 [2024-11-17 08:07:14.541455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:10.316 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.575 1+0 records in 00:07:10.575 1+0 records out 00:07:10.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579096 s, 7.1 MB/s 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:10.575 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.142 1+0 records in 00:07:11.142 1+0 records out 00:07:11.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626503 s, 6.5 MB/s 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:11.142 08:07:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.401 1+0 records in 00:07:11.401 1+0 records out 00:07:11.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763439 s, 5.4 MB/s 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:11.401 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:11.660 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:11.660 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:11.660 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.661 1+0 records in 00:07:11.661 1+0 records out 00:07:11.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687206 s, 6.0 MB/s 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:11.661 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.920 1+0 records in 00:07:11.920 1+0 records out 00:07:11.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682031 s, 6.0 MB/s 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:11.920 08:07:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.179 1+0 records in 00:07:12.179 1+0 records out 00:07:12.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729849 s, 5.6 MB/s 00:07:12.179 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.180 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.180 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.180 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.180 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.180 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.180 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:12.180 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.439 1+0 records in 00:07:12.439 1+0 records out 00:07:12.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118033 s, 3.5 MB/s 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:12.439 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.007 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd0", 00:07:13.007 "bdev_name": "Nvme0n1" 00:07:13.007 }, 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd1", 00:07:13.007 "bdev_name": "Nvme1n1p1" 00:07:13.007 }, 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd2", 00:07:13.007 "bdev_name": "Nvme1n1p2" 00:07:13.007 }, 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd3", 00:07:13.007 "bdev_name": "Nvme2n1" 00:07:13.007 }, 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd4", 00:07:13.007 "bdev_name": "Nvme2n2" 00:07:13.007 }, 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd5", 00:07:13.007 "bdev_name": "Nvme2n3" 00:07:13.007 }, 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd6", 00:07:13.007 "bdev_name": "Nvme3n1" 00:07:13.007 } 00:07:13.007 ]' 00:07:13.007 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:13.007 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd0", 00:07:13.007 "bdev_name": "Nvme0n1" 00:07:13.007 }, 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd1", 00:07:13.007 "bdev_name": "Nvme1n1p1" 00:07:13.007 }, 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd2", 00:07:13.007 "bdev_name": "Nvme1n1p2" 00:07:13.007 }, 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd3", 00:07:13.007 "bdev_name": "Nvme2n1" 00:07:13.007 }, 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd4", 00:07:13.007 "bdev_name": "Nvme2n2" 00:07:13.007 }, 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd5", 00:07:13.007 "bdev_name": "Nvme2n3" 00:07:13.007 }, 00:07:13.007 { 00:07:13.007 "nbd_device": "/dev/nbd6", 00:07:13.007 "bdev_name": "Nvme3n1" 00:07:13.007 } 00:07:13.007 ]' 00:07:13.007 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:13.007 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:13.007 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.007 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:13.007 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.007 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:13.007 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.007 08:07:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.267 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.267 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.267 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.267 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.267 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.267 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.267 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.267 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.267 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.267 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.526 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.527 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:13.527 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.527 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.527 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.527 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.527 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.527 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.527 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.527 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.786 08:07:18 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.786 08:07:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:14.046 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:14.046 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:14.046 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:14.046 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.046 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.046 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:14.046 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.046 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.046 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.046 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:14.305 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:14.305 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:14.305 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:14.305 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.305 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.305 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:14.305 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.305 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.305 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.305 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:14.564 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:14.564 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:14.564 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:14.564 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.564 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.564 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:14.564 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.564 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.564 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.564 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.564 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:14.823 
08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:14.823 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:14.824 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:14.824 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.083 08:07:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:15.083 /dev/nbd0 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.083 1+0 records in 00:07:15.083 1+0 records out 00:07:15.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579589 s, 7.1 MB/s 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.083 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:15.343 /dev/nbd1 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.343 08:07:20 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.343 1+0 records in 00:07:15.343 1+0 records out 00:07:15.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576719 s, 7.1 MB/s 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.343 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:15.911 /dev/nbd10 00:07:15.911 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:15.911 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:15.911 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:15.911 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.911 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.911 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.911 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:15.911 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.911 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.911 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.911 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.912 1+0 records in 00:07:15.912 1+0 records out 00:07:15.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685606 s, 6.0 MB/s 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:15.912 /dev/nbd11 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.912 1+0 records in 00:07:15.912 1+0 records out 00:07:15.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059421 s, 6.9 MB/s 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:15.912 08:07:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:16.171 /dev/nbd12 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.171 1+0 records in 00:07:16.171 1+0 records out 00:07:16.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770831 s, 5.3 MB/s 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.171 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:16.431 /dev/nbd13 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.431 1+0 records in 00:07:16.431 1+0 records out 00:07:16.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110316 s, 3.7 MB/s 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.431 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:16.690 /dev/nbd14 00:07:16.690 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:16.690 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:16.690 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:16.690 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.690 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.690 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.690 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:16.690 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.690 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.690 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.690 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.950 1+0 records in 00:07:16.950 1+0 records out 00:07:16.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000997674 s, 4.1 MB/s 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd0", 00:07:16.950 "bdev_name": "Nvme0n1" 00:07:16.950 }, 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd1", 00:07:16.950 "bdev_name": "Nvme1n1p1" 00:07:16.950 }, 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd10", 00:07:16.950 "bdev_name": "Nvme1n1p2" 00:07:16.950 }, 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd11", 00:07:16.950 "bdev_name": "Nvme2n1" 00:07:16.950 }, 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd12", 00:07:16.950 "bdev_name": "Nvme2n2" 00:07:16.950 }, 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd13", 00:07:16.950 "bdev_name": "Nvme2n3" 
00:07:16.950 }, 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd14", 00:07:16.950 "bdev_name": "Nvme3n1" 00:07:16.950 } 00:07:16.950 ]' 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd0", 00:07:16.950 "bdev_name": "Nvme0n1" 00:07:16.950 }, 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd1", 00:07:16.950 "bdev_name": "Nvme1n1p1" 00:07:16.950 }, 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd10", 00:07:16.950 "bdev_name": "Nvme1n1p2" 00:07:16.950 }, 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd11", 00:07:16.950 "bdev_name": "Nvme2n1" 00:07:16.950 }, 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd12", 00:07:16.950 "bdev_name": "Nvme2n2" 00:07:16.950 }, 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd13", 00:07:16.950 "bdev_name": "Nvme2n3" 00:07:16.950 }, 00:07:16.950 { 00:07:16.950 "nbd_device": "/dev/nbd14", 00:07:16.950 "bdev_name": "Nvme3n1" 00:07:16.950 } 00:07:16.950 ]' 00:07:16.950 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:17.209 /dev/nbd1 00:07:17.209 /dev/nbd10 00:07:17.209 /dev/nbd11 00:07:17.209 /dev/nbd12 00:07:17.209 /dev/nbd13 00:07:17.209 /dev/nbd14' 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:17.209 /dev/nbd1 00:07:17.209 /dev/nbd10 00:07:17.209 /dev/nbd11 00:07:17.209 /dev/nbd12 00:07:17.209 /dev/nbd13 00:07:17.209 /dev/nbd14' 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:17.209 256+0 records in 00:07:17.209 256+0 records out 00:07:17.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010646 s, 98.5 MB/s 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.209 08:07:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:17.209 256+0 records in 00:07:17.209 256+0 records out 00:07:17.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.181708 s, 5.8 MB/s 00:07:17.209 08:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.209 08:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:17.473 256+0 records in 00:07:17.473 256+0 records out 00:07:17.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188944 s, 5.5 MB/s 00:07:17.473 08:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.473 08:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:17.753 256+0 records in 00:07:17.753 256+0 records out 00:07:17.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169009 s, 6.2 MB/s 00:07:17.753 08:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.753 08:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:17.753 256+0 records in 00:07:17.753 256+0 records out 00:07:17.753 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185808 s, 5.6 MB/s 00:07:17.753 08:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.753 08:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:18.031 256+0 records in 00:07:18.031 256+0 records out 00:07:18.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183317 s, 5.7 MB/s 00:07:18.031 08:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.031 08:07:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:18.297 256+0 records in 00:07:18.297 256+0 records out 00:07:18.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185846 s, 5.6 MB/s 00:07:18.297 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.297 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:18.298 256+0 records in 00:07:18.298 256+0 records out 00:07:18.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149579 s, 7.0 MB/s 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.298 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:18.557 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.557 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:18.557 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.557 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:18.557 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:18.557 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:18.557 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.557 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:18.557 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.557 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:18.557 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.557 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.817 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.817 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.817 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.817 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.817 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.817 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.817 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.817 08:07:23 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:18.817 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.817 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:19.076 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:19.076 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:19.076 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:19.076 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.076 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.076 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:19.076 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.076 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.076 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.076 08:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:19.335 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:19.335 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:19.335 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:19.335 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.335 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.335 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:19.335 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.335 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.335 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.335 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:19.595 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:19.595 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:19.595 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:19.595 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.595 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.595 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:19.595 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.595 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.595 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.595 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:19.854 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:19.854 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:19.854 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:19.854 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.854 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.854 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:19.854 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.854 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.854 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.855 08:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:20.113 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:20.113 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:20.113 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:20.113 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.113 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.113 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:20.113 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.113 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.113 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.113 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:20.372 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:20.372 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:20.372 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:20.372 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.372 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.372 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:20.372 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.372 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.372 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.372 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.372 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:20.631 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:20.890 malloc_lvol_verify 00:07:20.890 08:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:21.149 a134c47d-e767-4c9e-ae41-c65f8cc61960 00:07:21.149 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:21.408 bdc1e745-aec2-4b02-88a7-5d96fedeccf8 00:07:21.408 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:21.667 /dev/nbd0 00:07:21.667 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:21.667 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:21.667 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:21.667 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:21.667 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:21.668 mke2fs 1.47.0 (5-Feb-2023) 00:07:21.668 Discarding device blocks: 0/4096 done 00:07:21.668 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:21.668 00:07:21.668 Allocating group tables: 0/1 done 00:07:21.668 Writing inode tables: 0/1 done 00:07:21.668 Creating journal (1024 blocks): done 00:07:21.668 Writing superblocks and filesystem accounting information: 0/1 done 00:07:21.668 00:07:21.668 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:21.668 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.668 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:21.668 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.668 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:21.668 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:21.668 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62274 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62274 ']' 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62274 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62274 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.927 killing process with pid 62274 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62274' 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62274 00:07:21.927 08:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62274 00:07:22.865 08:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:22.865 00:07:22.865 real 0m13.504s 00:07:22.865 user 0m19.376s 00:07:22.865 sys 0m4.309s 00:07:22.865 08:07:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.865 ************************************ 00:07:22.865 END TEST bdev_nbd 00:07:22.865 ************************************ 00:07:22.865 08:07:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:22.865 08:07:27 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:22.865 08:07:27 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:22.865 08:07:27 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:22.865 skipping fio tests on NVMe due to multi-ns failures. 00:07:22.865 08:07:27 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:22.865 08:07:27 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:22.865 08:07:27 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:22.865 08:07:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:22.865 08:07:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.865 08:07:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.865 ************************************ 00:07:22.865 START TEST bdev_verify 00:07:22.865 ************************************ 00:07:22.865 08:07:27 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:22.865 [2024-11-17 08:07:27.864445] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:22.866 [2024-11-17 08:07:27.864614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62712 ] 00:07:23.124 [2024-11-17 08:07:28.042920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.384 [2024-11-17 08:07:28.137498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.384 [2024-11-17 08:07:28.137505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.951 Running I/O for 5 seconds... 
00:07:26.264 20416.00 IOPS, 79.75 MiB/s [2024-11-17T08:07:32.212Z] 19392.00 IOPS, 75.75 MiB/s [2024-11-17T08:07:33.146Z] 18688.00 IOPS, 73.00 MiB/s [2024-11-17T08:07:34.083Z] 18640.00 IOPS, 72.81 MiB/s [2024-11-17T08:07:34.084Z] 19225.60 IOPS, 75.10 MiB/s 00:07:29.072 Latency(us) 00:07:29.072 [2024-11-17T08:07:34.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.072 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x0 length 0xbd0bd 00:07:29.072 Nvme0n1 : 5.05 1369.94 5.35 0.00 0.00 93097.33 21090.68 88652.33 00:07:29.072 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:29.072 Nvme0n1 : 5.04 1319.75 5.16 0.00 0.00 96643.63 23950.43 87699.08 00:07:29.072 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x0 length 0x4ff80 00:07:29.072 Nvme1n1p1 : 5.05 1369.34 5.35 0.00 0.00 92902.93 24665.37 80073.08 00:07:29.072 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:29.072 Nvme1n1p1 : 5.04 1319.36 5.15 0.00 0.00 96491.83 25976.09 81979.58 00:07:29.072 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x0 length 0x4ff7f 00:07:29.072 Nvme1n1p2 : 5.08 1374.55 5.37 0.00 0.00 92302.26 9175.04 79596.45 00:07:29.072 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:29.072 Nvme1n1p2 : 5.07 1325.11 5.18 0.00 0.00 95838.17 8460.10 81026.33 00:07:29.072 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x0 length 0x80000 00:07:29.072 Nvme2n1 : 5.09 1383.58 5.40 0.00 0.00 91739.65 9949.56 77213.32 00:07:29.072 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x80000 length 0x80000 00:07:29.072 Nvme2n1 : 5.08 1334.25 5.21 0.00 0.00 95194.21 10783.65 78643.20 00:07:29.072 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x0 length 0x80000 00:07:29.072 Nvme2n2 : 5.09 1383.04 5.40 0.00 0.00 91573.20 10604.92 78643.20 00:07:29.072 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x80000 length 0x80000 00:07:29.072 Nvme2n2 : 5.09 1333.88 5.21 0.00 0.00 95025.73 10604.92 78643.20 00:07:29.072 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x0 length 0x80000 00:07:29.072 Nvme2n3 : 5.09 1382.72 5.40 0.00 0.00 91406.98 10604.92 80549.70 00:07:29.072 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x80000 length 0x80000 00:07:29.072 Nvme2n3 : 5.09 1333.48 5.21 0.00 0.00 94855.81 10902.81 81979.58 00:07:29.072 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x0 length 0x20000 00:07:29.072 Nvme3n1 : 5.09 1382.42 5.40 0.00 0.00 91256.48 10426.18 82456.20 00:07:29.072 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:29.072 Verification LBA range: start 0x20000 length 0x20000 00:07:29.072 Nvme3n1 
: 5.09 1333.17 5.21 0.00 0.00 94691.72 10843.23 84839.33 00:07:29.072 [2024-11-17T08:07:34.084Z] =================================================================================================================== 00:07:29.072 [2024-11-17T08:07:34.084Z] Total : 18944.59 74.00 0.00 0.00 93748.88 8460.10 88652.33 00:07:30.008 00:07:30.008 real 0m7.076s 00:07:30.008 user 0m13.132s 00:07:30.008 sys 0m0.232s 00:07:30.008 08:07:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.008 ************************************ 00:07:30.008 END TEST bdev_verify 00:07:30.008 08:07:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:30.008 ************************************ 00:07:30.008 08:07:34 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:30.008 08:07:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:30.008 08:07:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.008 08:07:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.008 ************************************ 00:07:30.008 START TEST bdev_verify_big_io 00:07:30.008 ************************************ 00:07:30.008 08:07:34 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:30.008 [2024-11-17 08:07:34.998517] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:30.008 [2024-11-17 08:07:34.998691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62810 ] 00:07:30.268 [2024-11-17 08:07:35.185340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.528 [2024-11-17 08:07:35.311876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.528 [2024-11-17 08:07:35.311889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.097 Running I/O for 5 seconds... 
00:07:36.289 1638.00 IOPS, 102.38 MiB/s [2024-11-17T08:07:41.869Z] 2511.50 IOPS, 156.97 MiB/s [2024-11-17T08:07:42.129Z] 2611.67 IOPS, 163.23 MiB/s 00:07:37.117 Latency(us) 00:07:37.117 [2024-11-17T08:07:42.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.117 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x0 length 0xbd0b 00:07:37.117 Nvme0n1 : 5.68 115.54 7.22 0.00 0.00 1068096.85 19065.02 1578583.51 00:07:37.117 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:37.117 Nvme0n1 : 5.72 123.09 7.69 0.00 0.00 1007561.67 21328.99 999006.95 00:07:37.117 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x0 length 0x4ff8 00:07:37.117 Nvme1n1p1 : 5.78 114.62 7.16 0.00 0.00 1036956.55 36700.16 1593835.52 00:07:37.117 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:37.117 Nvme1n1p1 : 5.72 121.63 7.60 0.00 0.00 997007.49 40989.79 1006632.96 00:07:37.117 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x0 length 0x4ff7 00:07:37.117 Nvme1n1p2 : 5.79 123.20 7.70 0.00 0.00 951733.80 55765.18 1128649.08 00:07:37.117 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:37.117 Nvme1n1p2 : 5.84 120.58 7.54 0.00 0.00 982047.82 107717.35 1060015.01 00:07:37.117 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x0 length 0x8000 00:07:37.117 Nvme2n1 : 5.85 122.84 7.68 0.00 0.00 937005.81 54335.30 1662469.59 00:07:37.117 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x8000 length 0x8000 00:07:37.117 Nvme2n1 : 5.84 122.43 7.65 0.00 0.00 941778.57 123922.62 1082893.03 00:07:37.117 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x0 length 0x8000 00:07:37.117 Nvme2n2 : 5.91 134.01 8.38 0.00 0.00 837228.87 45756.04 1204909.15 00:07:37.117 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x8000 length 0x8000 00:07:37.117 Nvme2n2 : 5.84 131.45 8.22 0.00 0.00 860858.34 51475.55 876990.84 00:07:37.117 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x0 length 0x8000 00:07:37.117 Nvme2n3 : 5.91 132.10 8.26 0.00 0.00 828119.03 16443.58 1738729.66 00:07:37.117 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x8000 length 0x8000 00:07:37.117 Nvme2n3 : 5.89 135.24 8.45 0.00 0.00 814704.05 46470.98 903681.86 00:07:37.117 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x0 length 0x2000 00:07:37.117 Nvme3n1 : 5.94 148.19 9.26 0.00 0.00 721260.61 6970.65 1769233.69 00:07:37.117 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.117 Verification LBA range: start 0x2000 length 0x2000 00:07:37.117 Nvme3n1 : 5.91 147.47 9.22 0.00 0.00 731979.56 6494.02 926559.88 00:07:37.117 
[2024-11-17T08:07:42.129Z] =================================================================================================================== 00:07:37.117 [2024-11-17T08:07:42.129Z] Total : 1792.40 112.02 0.00 0.00 898824.37 6494.02 1769233.69 00:07:38.495 00:07:38.495 real 0m8.530s 00:07:38.495 user 0m15.977s 00:07:38.495 sys 0m0.257s 00:07:38.495 08:07:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.495 08:07:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:38.495 ************************************ 00:07:38.495 END TEST bdev_verify_big_io 00:07:38.495 ************************************ 00:07:38.495 08:07:43 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.495 08:07:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:38.495 08:07:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.495 08:07:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.495 ************************************ 00:07:38.495 START TEST bdev_write_zeroes 00:07:38.495 ************************************ 00:07:38.495 08:07:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.754 [2024-11-17 08:07:43.558022] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:38.754 [2024-11-17 08:07:43.558170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62919 ] 00:07:38.754 [2024-11-17 08:07:43.720773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.013 [2024-11-17 08:07:43.802398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.580 Running I/O for 1 seconds... 
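Both verification passes above (bdev_verify and its big-I/O variant) are single bdevperf invocations differing only in I/O size; restated from the run_test lines in the trace, with flags copied verbatim (-m 0x3 pins the two reactor cores whose startup is logged above):

  bp=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  # bdev_verify: 4 KiB I/O with data verification, queue depth 128, 5 s run.
  $bp --json $cfg -q 128 -o 4096 -w verify -t 5 -C -m 0x3
  # bdev_verify_big_io: the same workload at 64 KiB per I/O.
  $bp --json $cfg -q 128 -o 65536 -w verify -t 5 -C -m 0x3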
00:07:40.514 53760.00 IOPS, 210.00 MiB/s 00:07:40.514 Latency(us) 00:07:40.514 [2024-11-17T08:07:45.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.514 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.514 Nvme0n1 : 1.04 7586.15 29.63 0.00 0.00 16827.51 13881.72 35031.97 00:07:40.514 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.514 Nvme1n1p1 : 1.04 7573.90 29.59 0.00 0.00 16823.29 14000.87 34078.72 00:07:40.514 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.514 Nvme1n1p2 : 1.04 7561.73 29.54 0.00 0.00 16778.62 13643.40 32887.16 00:07:40.514 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.514 Nvme2n1 : 1.04 7550.67 29.49 0.00 0.00 16744.08 13464.67 31933.91 00:07:40.514 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.514 Nvme2n2 : 1.04 7539.51 29.45 0.00 0.00 16725.04 10724.07 31218.97 00:07:40.514 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.514 Nvme2n3 : 1.05 7528.54 29.41 0.00 0.00 16708.80 10366.60 32887.16 00:07:40.514 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:40.514 Nvme3n1 : 1.05 7517.52 29.37 0.00 0.00 16689.82 9651.67 35031.97 00:07:40.514 [2024-11-17T08:07:45.527Z] =================================================================================================================== 00:07:40.515 [2024-11-17T08:07:45.527Z] Total : 52858.01 206.48 0.00 0.00 16756.74 9651.67 35031.97 00:07:41.452 00:07:41.452 real 0m2.846s 00:07:41.452 user 0m2.541s 00:07:41.452 sys 0m0.187s 00:07:41.452 08:07:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.452 ************************************ 00:07:41.452 END TEST bdev_write_zeroes 00:07:41.452 08:07:46 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:41.452 ************************************ 00:07:41.453 08:07:46 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.453 08:07:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:41.453 08:07:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.453 08:07:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.453 ************************************ 00:07:41.453 START TEST bdev_json_nonenclosed 00:07:41.453 ************************************ 00:07:41.453 08:07:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.453 [2024-11-17 08:07:46.455052] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:41.453 [2024-11-17 08:07:46.455197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62972 ] 00:07:41.712 [2024-11-17 08:07:46.614173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.712 [2024-11-17 08:07:46.693293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.712 [2024-11-17 08:07:46.693407] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:41.712 [2024-11-17 08:07:46.693431] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:41.712 [2024-11-17 08:07:46.693443] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.971 00:07:41.971 real 0m0.518s 00:07:41.971 user 0m0.321s 00:07:41.971 sys 0m0.093s 00:07:41.971 08:07:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.971 08:07:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:41.971 ************************************ 00:07:41.971 END TEST bdev_json_nonenclosed 00:07:41.971 ************************************ 00:07:41.971 08:07:46 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.971 08:07:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:41.971 08:07:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.971 08:07:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.971 ************************************ 00:07:41.971 START TEST bdev_json_nonarray 00:07:41.971 ************************************ 00:07:41.971 08:07:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:42.230 [2024-11-17 08:07:47.054052] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:42.230 [2024-11-17 08:07:47.054240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62998 ] 00:07:42.230 [2024-11-17 08:07:47.232899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.489 [2024-11-17 08:07:47.317525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.489 [2024-11-17 08:07:47.317617] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:42.489 [2024-11-17 08:07:47.317641] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:42.489 [2024-11-17 08:07:47.317652] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.749 00:07:42.749 real 0m0.562s 00:07:42.749 user 0m0.334s 00:07:42.749 sys 0m0.124s 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:42.749 ************************************ 00:07:42.749 END TEST bdev_json_nonarray 00:07:42.749 ************************************ 00:07:42.749 08:07:47 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:42.749 08:07:47 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:42.749 08:07:47 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:42.749 08:07:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.749 08:07:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.749 08:07:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.749 ************************************ 00:07:42.749 START TEST bdev_gpt_uuid 00:07:42.749 ************************************ 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63023 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63023 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63023 ']' 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.749 08:07:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:42.749 [2024-11-17 08:07:47.696001] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
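The two JSON negative tests above (bdev_json_nonenclosed, bdev_json_nonarray) hand bdevperf deliberately malformed --json configs and pass when the 'not enclosed in {}' and ''subsystems' should be an array' errors fire, as they do in the trace. The fixture files themselves are not shown in the log, so the shapes below are assumptions that merely illustrate what the two logged errors imply:

  # nonenclosed.json (assumed shape): top level is not a JSON object
  #   "subsystems": []
  # nonarray.json (assumed shape): "subsystems" is an object, not an array
  #   { "subsystems": {} }
  # A config bdevperf accepts is an object whose "subsystems" member is an array:
  #   { "subsystems": [] }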
00:07:42.749 [2024-11-17 08:07:47.696267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63023 ] 00:07:43.008 [2024-11-17 08:07:47.874667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.008 [2024-11-17 08:07:47.954848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:43.946 Some configs were skipped because the RPC state that can call them passed over. 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.946 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:44.206 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.206 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:44.206 { 00:07:44.206 "name": "Nvme1n1p1", 00:07:44.206 "aliases": [ 00:07:44.206 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:44.206 ], 00:07:44.206 "product_name": "GPT Disk", 00:07:44.206 "block_size": 4096, 00:07:44.206 "num_blocks": 655104, 00:07:44.206 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:44.206 "assigned_rate_limits": { 00:07:44.206 "rw_ios_per_sec": 0, 00:07:44.206 "rw_mbytes_per_sec": 0, 00:07:44.206 "r_mbytes_per_sec": 0, 00:07:44.206 "w_mbytes_per_sec": 0 00:07:44.206 }, 00:07:44.206 "claimed": false, 00:07:44.206 "zoned": false, 00:07:44.206 "supported_io_types": { 00:07:44.206 "read": true, 00:07:44.206 "write": true, 00:07:44.206 "unmap": true, 00:07:44.206 "flush": true, 00:07:44.206 "reset": true, 00:07:44.206 "nvme_admin": false, 00:07:44.206 "nvme_io": false, 00:07:44.206 "nvme_io_md": false, 00:07:44.206 "write_zeroes": true, 00:07:44.206 "zcopy": false, 00:07:44.206 "get_zone_info": false, 00:07:44.206 "zone_management": false, 00:07:44.206 "zone_append": false, 00:07:44.206 "compare": true, 00:07:44.206 "compare_and_write": false, 00:07:44.206 "abort": true, 00:07:44.206 "seek_hole": false, 00:07:44.206 "seek_data": false, 00:07:44.206 "copy": true, 00:07:44.206 "nvme_iov_md": false 00:07:44.206 }, 00:07:44.206 "driver_specific": { 
00:07:44.206 "gpt": { 00:07:44.206 "base_bdev": "Nvme1n1", 00:07:44.206 "offset_blocks": 256, 00:07:44.206 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:44.206 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:44.206 "partition_name": "SPDK_TEST_first" 00:07:44.206 } 00:07:44.206 } 00:07:44.206 } 00:07:44.206 ]' 00:07:44.206 08:07:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:44.206 { 00:07:44.206 "name": "Nvme1n1p2", 00:07:44.206 "aliases": [ 00:07:44.206 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:44.206 ], 00:07:44.206 "product_name": "GPT Disk", 00:07:44.206 "block_size": 4096, 00:07:44.206 "num_blocks": 655103, 00:07:44.206 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:44.206 "assigned_rate_limits": { 00:07:44.206 "rw_ios_per_sec": 0, 00:07:44.206 "rw_mbytes_per_sec": 0, 00:07:44.206 "r_mbytes_per_sec": 0, 00:07:44.206 "w_mbytes_per_sec": 0 00:07:44.206 }, 00:07:44.206 "claimed": false, 00:07:44.206 "zoned": false, 00:07:44.206 "supported_io_types": { 00:07:44.206 "read": true, 00:07:44.206 "write": true, 00:07:44.206 "unmap": true, 00:07:44.206 "flush": true, 00:07:44.206 "reset": true, 00:07:44.206 "nvme_admin": false, 00:07:44.206 "nvme_io": false, 00:07:44.206 "nvme_io_md": false, 00:07:44.206 "write_zeroes": true, 00:07:44.206 "zcopy": false, 00:07:44.206 "get_zone_info": false, 00:07:44.206 "zone_management": false, 00:07:44.206 "zone_append": false, 00:07:44.206 "compare": true, 00:07:44.206 "compare_and_write": false, 00:07:44.206 "abort": true, 00:07:44.206 "seek_hole": false, 00:07:44.206 "seek_data": false, 00:07:44.206 "copy": true, 00:07:44.206 "nvme_iov_md": false 00:07:44.206 }, 00:07:44.206 "driver_specific": { 00:07:44.206 "gpt": { 00:07:44.206 "base_bdev": "Nvme1n1", 00:07:44.206 "offset_blocks": 655360, 00:07:44.206 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:44.206 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:44.206 "partition_name": "SPDK_TEST_second" 00:07:44.206 } 00:07:44.206 } 00:07:44.206 } 00:07:44.206 ]' 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:44.206 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63023 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63023 ']' 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63023 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63023 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.464 killing process with pid 63023 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63023' 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63023 00:07:44.464 08:07:49 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63023 00:07:46.370 00:07:46.370 real 0m3.340s 00:07:46.370 user 0m3.682s 00:07:46.370 sys 0m0.403s 00:07:46.370 08:07:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.370 08:07:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:46.370 ************************************ 00:07:46.370 END TEST bdev_gpt_uuid 00:07:46.370 ************************************ 00:07:46.370 08:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:46.370 08:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:46.370 08:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:46.370 08:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:46.370 08:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:46.370 08:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:46.370 08:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:46.370 08:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:46.370 08:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:46.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:46.629 Waiting for block devices as requested 00:07:46.629 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.629 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:46.887 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.887 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:52.294 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:52.294 08:07:56 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:52.294 08:07:56 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:52.294 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:52.294 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:52.294 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:52.294 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:52.294 08:07:57 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:52.294 00:07:52.294 real 0m59.953s 00:07:52.294 user 1m17.570s 00:07:52.294 sys 0m9.077s 00:07:52.294 08:07:57 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.294 08:07:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.294 ************************************ 00:07:52.294 END TEST blockdev_nvme_gpt 00:07:52.294 ************************************ 00:07:52.294 08:07:57 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:52.294 08:07:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.294 08:07:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.294 08:07:57 -- common/autotest_common.sh@10 -- # set +x 00:07:52.294 ************************************ 00:07:52.294 START TEST nvme 00:07:52.294 ************************************ 00:07:52.294 08:07:57 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:52.294 * Looking for test storage... 00:07:52.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:52.294 08:07:57 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:52.560 08:07:57 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:52.560 08:07:57 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:52.560 08:07:57 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:52.560 08:07:57 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.560 08:07:57 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.560 08:07:57 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.560 08:07:57 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.560 08:07:57 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.560 08:07:57 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.560 08:07:57 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.560 08:07:57 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.560 08:07:57 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.560 08:07:57 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.560 08:07:57 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.560 08:07:57 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:52.560 08:07:57 nvme -- scripts/common.sh@345 -- # : 1 00:07:52.560 08:07:57 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.560 08:07:57 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.560 08:07:57 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:52.560 08:07:57 nvme -- scripts/common.sh@353 -- # local d=1 00:07:52.560 08:07:57 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.560 08:07:57 nvme -- scripts/common.sh@355 -- # echo 1 00:07:52.560 08:07:57 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.560 08:07:57 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:52.560 08:07:57 nvme -- scripts/common.sh@353 -- # local d=2 00:07:52.560 08:07:57 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.560 08:07:57 nvme -- scripts/common.sh@355 -- # echo 2 00:07:52.560 08:07:57 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.560 08:07:57 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.560 08:07:57 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.560 08:07:57 nvme -- scripts/common.sh@368 -- # return 0 00:07:52.560 08:07:57 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.560 08:07:57 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:52.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.560 --rc genhtml_branch_coverage=1 00:07:52.560 --rc genhtml_function_coverage=1 00:07:52.560 --rc genhtml_legend=1 00:07:52.560 --rc geninfo_all_blocks=1 00:07:52.560 --rc geninfo_unexecuted_blocks=1 00:07:52.560 00:07:52.560 ' 00:07:52.560 08:07:57 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:52.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.560 --rc genhtml_branch_coverage=1 00:07:52.560 --rc genhtml_function_coverage=1 00:07:52.560 --rc genhtml_legend=1 00:07:52.560 --rc geninfo_all_blocks=1 00:07:52.560 --rc geninfo_unexecuted_blocks=1 00:07:52.560 00:07:52.560 ' 00:07:52.560 08:07:57 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:52.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.560 --rc genhtml_branch_coverage=1 00:07:52.560 --rc genhtml_function_coverage=1 00:07:52.560 --rc genhtml_legend=1 00:07:52.560 --rc geninfo_all_blocks=1 00:07:52.560 --rc geninfo_unexecuted_blocks=1 00:07:52.560 00:07:52.560 ' 00:07:52.560 08:07:57 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:52.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.560 --rc genhtml_branch_coverage=1 00:07:52.560 --rc genhtml_function_coverage=1 00:07:52.560 --rc genhtml_legend=1 00:07:52.560 --rc geninfo_all_blocks=1 00:07:52.560 --rc geninfo_unexecuted_blocks=1 00:07:52.560 00:07:52.560 ' 00:07:52.560 08:07:57 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:53.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:53.698 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.698 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.698 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.698 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.698 08:07:58 nvme -- nvme/nvme.sh@79 -- # uname 00:07:53.698 08:07:58 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:53.698 08:07:58 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:53.698 08:07:58 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:53.698 08:07:58 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:53.698 08:07:58 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:53.698 Waiting for stub to ready for secondary processes... 00:07:53.698 08:07:58 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:53.698 08:07:58 nvme -- common/autotest_common.sh@1075 -- # stubpid=63666 00:07:53.698 08:07:58 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:53.698 08:07:58 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:53.698 08:07:58 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:53.698 08:07:58 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63666 ]] 00:07:53.698 08:07:58 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:53.957 [2024-11-17 08:07:58.744059] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:53.957 [2024-11-17 08:07:58.744433] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:54.894 [2024-11-17 08:07:59.626136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.894 08:07:59 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:54.894 08:07:59 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63666 ]] 00:07:54.894 08:07:59 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:54.894 [2024-11-17 08:07:59.746836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.894 [2024-11-17 08:07:59.746968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.894 [2024-11-17 08:07:59.746989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.894 [2024-11-17 08:07:59.769208] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:54.894 [2024-11-17 08:07:59.769257] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:54.894 [2024-11-17 08:07:59.782626] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:54.894 [2024-11-17 08:07:59.782754] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:54.894 [2024-11-17 08:07:59.785895] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:54.894 [2024-11-17 08:07:59.786390] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:54.894 [2024-11-17 08:07:59.786495] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:54.894 [2024-11-17 08:07:59.789560] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:54.894 [2024-11-17 08:07:59.790059] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:54.894 [2024-11-17 08:07:59.790198] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:54.894 [2024-11-17 08:07:59.793705] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:54.894 [2024-11-17 08:07:59.795159] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:54.894 [2024-11-17 08:07:59.795283] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:54.894 [2024-11-17 08:07:59.795374] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:54.894 [2024-11-17 08:07:59.795434] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:55.831 done. 00:07:55.831 08:08:00 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:55.831 08:08:00 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:55.831 08:08:00 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:55.831 08:08:00 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:55.831 08:08:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.831 08:08:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.831 ************************************ 00:07:55.831 START TEST nvme_reset 00:07:55.831 ************************************ 00:07:55.831 08:08:00 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:56.090 Initializing NVMe Controllers 00:07:56.090 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:56.090 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:56.090 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:56.090 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:56.090 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:56.090 00:07:56.090 real 0m0.314s 00:07:56.090 user 0m0.120s 00:07:56.090 sys 0m0.147s 00:07:56.090 08:08:01 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.090 ************************************ 00:07:56.090 END TEST nvme_reset 00:07:56.090 ************************************ 00:07:56.090 08:08:01 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:56.090 08:08:01 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:56.090 08:08:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.090 08:08:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.090 08:08:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.090 ************************************ 00:07:56.090 START TEST nvme_identify 00:07:56.090 ************************************ 00:07:56.090 08:08:01 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:56.090 08:08:01 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:56.090 08:08:01 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:56.090 08:08:01 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:56.090 08:08:01 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:56.090 08:08:01 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:56.090 08:08:01 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:56.090 08:08:01 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:56.090 08:08:01 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:56.090 08:08:01 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:56.349 08:08:01 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:56.349 08:08:01 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:56.349 08:08:01 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:56.610 [2024-11-17 08:08:01.412541] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63699 terminated unexpected 00:07:56.610 ===================================================== 00:07:56.610 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:56.610 ===================================================== 00:07:56.610 Controller Capabilities/Features 00:07:56.610 ================================ 00:07:56.610 Vendor ID: 1b36 00:07:56.610 Subsystem Vendor ID: 1af4 00:07:56.610 Serial Number: 12340 00:07:56.610 Model Number: QEMU NVMe Ctrl 00:07:56.610 Firmware Version: 8.0.0 00:07:56.610 Recommended Arb Burst: 6 00:07:56.610 IEEE OUI Identifier: 00 54 52 00:07:56.610 Multi-path I/O 00:07:56.610 May have multiple subsystem ports: No 00:07:56.610 May have multiple controllers: No 00:07:56.610 Associated with SR-IOV VF: No 00:07:56.610 Max Data Transfer Size: 524288 00:07:56.610 Max Number of Namespaces: 256 00:07:56.610 Max Number of I/O Queues: 64 00:07:56.610 NVMe Specification Version (VS): 1.4 00:07:56.610 NVMe Specification Version (Identify): 1.4 00:07:56.610 Maximum Queue Entries: 2048 00:07:56.610 Contiguous Queues Required: Yes 00:07:56.610 Arbitration Mechanisms Supported 00:07:56.610 Weighted Round Robin: Not Supported 00:07:56.610 Vendor Specific: Not Supported 00:07:56.610 Reset Timeout: 7500 ms 00:07:56.610 Doorbell Stride: 4 bytes 00:07:56.610 NVM Subsystem Reset: Not Supported 00:07:56.610 Command Sets Supported 00:07:56.610 NVM Command Set: Supported 00:07:56.610 Boot Partition: Not Supported 00:07:56.610 Memory Page Size Minimum: 4096 bytes 00:07:56.611 Memory Page Size Maximum: 65536 bytes 00:07:56.611 Persistent Memory Region: Not Supported 00:07:56.611 Optional Asynchronous Events Supported 00:07:56.611 Namespace Attribute Notices: Supported 00:07:56.611 Firmware Activation Notices: Not Supported 00:07:56.611 ANA Change Notices: Not Supported 00:07:56.611 PLE Aggregate Log Change Notices: Not Supported 00:07:56.611 LBA Status Info Alert Notices: Not Supported 00:07:56.611 EGE Aggregate Log Change Notices: Not Supported 00:07:56.611 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.611 Zone Descriptor Change Notices: Not Supported 00:07:56.611 Discovery Log Change Notices: Not Supported 00:07:56.611 Controller Attributes 00:07:56.611 128-bit Host Identifier: Not Supported 00:07:56.611 Non-Operational Permissive Mode: Not Supported 00:07:56.611 NVM Sets: Not Supported 00:07:56.611 Read Recovery Levels: Not Supported 00:07:56.611 Endurance Groups: Not Supported 00:07:56.611 Predictable Latency Mode: Not Supported 00:07:56.611 Traffic Based Keep ALive: Not Supported 00:07:56.611 Namespace Granularity: Not Supported 00:07:56.611 SQ Associations: Not Supported 00:07:56.611 UUID List: Not Supported 00:07:56.611 Multi-Domain Subsystem: Not Supported 00:07:56.611 Fixed Capacity Management: Not Supported 00:07:56.611 Variable Capacity Management: Not Supported 00:07:56.611 Delete Endurance Group: Not Supported 00:07:56.611 Delete NVM Set: Not Supported 00:07:56.611 Extended LBA Formats Supported: Supported 00:07:56.611 Flexible Data Placement Supported: Not Supported 00:07:56.611 00:07:56.611 Controller Memory Buffer Support 00:07:56.611 ================================ 00:07:56.611 Supported: No 00:07:56.611 00:07:56.611 Persistent 
Memory Region Support 00:07:56.611 ================================ 00:07:56.611 Supported: No 00:07:56.611 00:07:56.611 Admin Command Set Attributes 00:07:56.611 ============================ 00:07:56.611 Security Send/Receive: Not Supported 00:07:56.611 Format NVM: Supported 00:07:56.611 Firmware Activate/Download: Not Supported 00:07:56.611 Namespace Management: Supported 00:07:56.611 Device Self-Test: Not Supported 00:07:56.611 Directives: Supported 00:07:56.611 NVMe-MI: Not Supported 00:07:56.611 Virtualization Management: Not Supported 00:07:56.611 Doorbell Buffer Config: Supported 00:07:56.611 Get LBA Status Capability: Not Supported 00:07:56.611 Command & Feature Lockdown Capability: Not Supported 00:07:56.611 Abort Command Limit: 4 00:07:56.611 Async Event Request Limit: 4 00:07:56.611 Number of Firmware Slots: N/A 00:07:56.611 Firmware Slot 1 Read-Only: N/A 00:07:56.611 Firmware Activation Without Reset: N/A 00:07:56.611 Multiple Update Detection Support: N/A 00:07:56.611 Firmware Update Granularity: No Information Provided 00:07:56.611 Per-Namespace SMART Log: Yes 00:07:56.611 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.611 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:56.611 Command Effects Log Page: Supported 00:07:56.611 Get Log Page Extended Data: Supported 00:07:56.611 Telemetry Log Pages: Not Supported 00:07:56.611 Persistent Event Log Pages: Not Supported 00:07:56.611 Supported Log Pages Log Page: May Support 00:07:56.611 Commands Supported & Effects Log Page: Not Supported 00:07:56.611 Feature Identifiers & Effects Log Page:May Support 00:07:56.611 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.611 Data Area 4 for Telemetry Log: Not Supported 00:07:56.611 Error Log Page Entries Supported: 1 00:07:56.611 Keep Alive: Not Supported 00:07:56.611 00:07:56.611 NVM Command Set Attributes 00:07:56.611 ========================== 00:07:56.611 Submission Queue Entry Size 00:07:56.611 Max: 64 00:07:56.611 Min: 64 00:07:56.611 Completion Queue Entry Size 00:07:56.611 Max: 16 00:07:56.611 Min: 16 00:07:56.611 Number of Namespaces: 256 00:07:56.611 Compare Command: Supported 00:07:56.611 Write Uncorrectable Command: Not Supported 00:07:56.611 Dataset Management Command: Supported 00:07:56.611 Write Zeroes Command: Supported 00:07:56.611 Set Features Save Field: Supported 00:07:56.611 Reservations: Not Supported 00:07:56.611 Timestamp: Supported 00:07:56.611 Copy: Supported 00:07:56.611 Volatile Write Cache: Present 00:07:56.611 Atomic Write Unit (Normal): 1 00:07:56.611 Atomic Write Unit (PFail): 1 00:07:56.611 Atomic Compare & Write Unit: 1 00:07:56.611 Fused Compare & Write: Not Supported 00:07:56.611 Scatter-Gather List 00:07:56.611 SGL Command Set: Supported 00:07:56.611 SGL Keyed: Not Supported 00:07:56.611 SGL Bit Bucket Descriptor: Not Supported 00:07:56.611 SGL Metadata Pointer: Not Supported 00:07:56.611 Oversized SGL: Not Supported 00:07:56.611 SGL Metadata Address: Not Supported 00:07:56.611 SGL Offset: Not Supported 00:07:56.611 Transport SGL Data Block: Not Supported 00:07:56.611 Replay Protected Memory Block: Not Supported 00:07:56.611 00:07:56.611 Firmware Slot Information 00:07:56.611 ========================= 00:07:56.611 Active slot: 1 00:07:56.611 Slot 1 Firmware Revision: 1.0 00:07:56.611 00:07:56.611 00:07:56.611 Commands Supported and Effects 00:07:56.611 ============================== 00:07:56.611 Admin Commands 00:07:56.611 -------------- 00:07:56.611 Delete I/O Submission Queue (00h): Supported 00:07:56.611 Create I/O Submission 
Queue (01h): Supported 00:07:56.611 Get Log Page (02h): Supported 00:07:56.611 Delete I/O Completion Queue (04h): Supported 00:07:56.611 Create I/O Completion Queue (05h): Supported 00:07:56.611 Identify (06h): Supported 00:07:56.611 Abort (08h): Supported 00:07:56.611 Set Features (09h): Supported 00:07:56.611 Get Features (0Ah): Supported 00:07:56.611 Asynchronous Event Request (0Ch): Supported 00:07:56.611 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.611 Directive Send (19h): Supported 00:07:56.611 Directive Receive (1Ah): Supported 00:07:56.611 Virtualization Management (1Ch): Supported 00:07:56.611 Doorbell Buffer Config (7Ch): Supported 00:07:56.611 Format NVM (80h): Supported LBA-Change 00:07:56.611 I/O Commands 00:07:56.611 ------------ 00:07:56.611 Flush (00h): Supported LBA-Change 00:07:56.611 Write (01h): Supported LBA-Change 00:07:56.611 Read (02h): Supported 00:07:56.611 Compare (05h): Supported 00:07:56.611 Write Zeroes (08h): Supported LBA-Change 00:07:56.611 Dataset Management (09h): Supported LBA-Change 00:07:56.611 Unknown (0Ch): Supported 00:07:56.611 Unknown (12h): Supported 00:07:56.611 Copy (19h): Supported LBA-Change 00:07:56.611 Unknown (1Dh): Supported LBA-Change 00:07:56.611 00:07:56.611 Error Log 00:07:56.611 ========= 00:07:56.611 00:07:56.611 Arbitration 00:07:56.611 =========== 00:07:56.611 Arbitration Burst: no limit 00:07:56.611 00:07:56.611 Power Management 00:07:56.611 ================ 00:07:56.611 Number of Power States: 1 00:07:56.611 Current Power State: Power State #0 00:07:56.611 Power State #0: 00:07:56.611 Max Power: 25.00 W 00:07:56.611 Non-Operational State: Operational 00:07:56.611 Entry Latency: 16 microseconds 00:07:56.611 Exit Latency: 4 microseconds 00:07:56.611 Relative Read Throughput: 0 00:07:56.611 Relative Read Latency: 0 00:07:56.611 Relative Write Throughput: 0 00:07:56.611 Relative Write Latency: 0 00:07:56.611 Idle Power: Not Reported 00:07:56.611 Active Power: Not Reported 00:07:56.611 Non-Operational Permissive Mode: Not Supported 00:07:56.611 00:07:56.611 Health Information 00:07:56.611 ================== 00:07:56.611 Critical Warnings: 00:07:56.611 Available Spare Space: OK 00:07:56.611 Temperature: OK 00:07:56.611 Device Reliability: OK 00:07:56.611 Read Only: No 00:07:56.611 Volatile Memory Backup: OK 00:07:56.611 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.611 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.611 Available Spare: 0% 00:07:56.611 Available Spare Threshold: 0% 00:07:56.611 Life Percentage Used: 0% 00:07:56.611 Data Units Read: 679 00:07:56.611 Data Units Written: 607 00:07:56.611 Host Read Commands: 32562 00:07:56.611 Host Write Commands: 32348 00:07:56.611 Controller Busy Time: 0 minutes 00:07:56.611 Power Cycles: 0 00:07:56.611 Power On Hours: 0 hours 00:07:56.611 Unsafe Shutdowns: 0 00:07:56.611 Unrecoverable Media Errors: 0 00:07:56.611 Lifetime Error Log Entries: 0 00:07:56.611 Warning Temperature Time: 0 minutes 00:07:56.611 Critical Temperature Time: 0 minutes 00:07:56.611 00:07:56.611 Number of Queues 00:07:56.611 ================ 00:07:56.611 Number of I/O Submission Queues: 64 00:07:56.611 Number of I/O Completion Queues: 64 00:07:56.611 00:07:56.611 ZNS Specific Controller Data 00:07:56.611 ============================ 00:07:56.612 Zone Append Size Limit: 0 00:07:56.612 00:07:56.612 00:07:56.612 Active Namespaces 00:07:56.612 ================= 00:07:56.612 Namespace ID:1 00:07:56.612 Error Recovery Timeout: Unlimited 00:07:56.612 Command Set Identifier: NVM (00h) 
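
The identify dump above prints NVMe temperatures in both units; the drive reports Kelvin, and the Celsius figure is simply K - 273 (323 K -> 50 C, 343 K -> 70 C, as shown in the health section). A minimal shell sketch of the same conversion (variable names are illustrative, not from the test scripts):

    kelvin=323
    celsius=$((kelvin - 273))                      # NVMe reports Kelvin; identify prints both units
    echo "${kelvin} Kelvin (${celsius} Celsius)"   # -> 323 Kelvin (50 Celsius)
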
00:07:56.612 Deallocate: Supported 00:07:56.612 Deallocated/Unwritten Error: Supported 00:07:56.612 Deallocated Read Value: All 0x00 00:07:56.612 Deallocate in Write Zeroes: Not Supported 00:07:56.612 Deallocated Guard Field: 0xFFFF 00:07:56.612 Flush: Supported 00:07:56.612 Reservation: Not Supported 00:07:56.612 Metadata Transferred as: Separate Metadata Buffer 00:07:56.612 Namespace Sharing Capabilities: Private 00:07:56.612 Size (in LBAs): 1548666 (5GiB) 00:07:56.612 Capacity (in LBAs): 1548666 (5GiB) 00:07:56.612 Utilization (in LBAs): 1548666 (5GiB) 00:07:56.612 Thin Provisioning: Not Supported 00:07:56.612 Per-NS Atomic Units: No 00:07:56.612 Maximum Single Source Range Length: 128 00:07:56.612 Maximum Copy Length: 128 00:07:56.612 Maximum Source Range Count: 128 00:07:56.612 NGUID/EUI64 Never Reused: No 00:07:56.612 Namespace Write Protected: No 00:07:56.612 Number of LBA Formats: 8 00:07:56.612 Current LBA Format: LBA Format #07 00:07:56.612 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.612 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.612 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.612 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.612 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.612 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.612 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.612 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.612 00:07:56.612 NVM Specific Namespace Data 00:07:56.612 =========================== 00:07:56.612 Logical Block Storage Tag Mask: 0 00:07:56.612 Protection Information Capabilities: 00:07:56.612 16b Guard Protection Information Storage Tag Support: No 00:07:56.612 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.612 Storage Tag Check Read Support: No 00:07:56.612 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.612 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.612 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.612 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.612 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.612 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.612 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.612 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.612 ===================================================== 00:07:56.612 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:56.612 ===================================================== 00:07:56.612 Controller Capabilities/Features 00:07:56.612 ================================ 00:07:56.612 Vendor ID: 1b36 00:07:56.612 Subsystem Vendor ID: 1af4 00:07:56.612 Serial Number: 12341 00:07:56.612 Model Number: QEMU NVMe Ctrl 00:07:56.612 Firmware Version: 8.0.0 00:07:56.612 Recommended Arb Burst: 6 00:07:56.612 IEEE OUI Identifier: 00 54 52 00:07:56.612 Multi-path I/O 00:07:56.612 May have multiple subsystem ports: No 00:07:56.612 May have multiple controllers: No 00:07:56.612 Associated with SR-IOV VF: No 00:07:56.612 Max Data Transfer Size: 524288 00:07:56.612 Max Number of Namespaces: 256 00:07:56.612 Max Number of I/O Queues: 64 00:07:56.612 NVMe Specification 
Version (VS): 1.4 00:07:56.612 NVMe Specification Version (Identify): 1.4 00:07:56.612 Maximum Queue Entries: 2048 00:07:56.612 Contiguous Queues Required: Yes 00:07:56.612 Arbitration Mechanisms Supported 00:07:56.612 Weighted Round Robin: Not Supported 00:07:56.612 Vendor Specific: Not Supported 00:07:56.612 Reset Timeout: 7500 ms 00:07:56.612 Doorbell Stride: 4 bytes 00:07:56.612 NVM Subsystem Reset: Not Supported 00:07:56.612 Command Sets Supported 00:07:56.612 NVM Command Set: Supported 00:07:56.612 Boot Partition: Not Supported 00:07:56.612 Memory Page Size Minimum: 4096 bytes 00:07:56.612 Memory Page Size Maximum: 65536 bytes 00:07:56.612 Persistent Memory Region: Not Supported 00:07:56.612 Optional Asynchronous Events Supported 00:07:56.612 Namespace Attribute Notices: Supported 00:07:56.612 Firmware Activation Notices: Not Supported 00:07:56.612 ANA Change Notices: Not Supported 00:07:56.612 PLE Aggregate Log Change Notices: Not Supported 00:07:56.612 LBA Status Info Alert Notices: Not Supported 00:07:56.612 EGE Aggregate Log Change Notices: Not Supported 00:07:56.612 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.612 Zone Descriptor Change Notices: Not Supported 00:07:56.612 Discovery Log Change Notices: Not Supported 00:07:56.612 Controller Attributes 00:07:56.612 128-bit Host Identifier: Not Supported 00:07:56.612 Non-Operational Permissive Mode: Not Supported 00:07:56.612 NVM Sets: Not Supported 00:07:56.612 Read Recovery Levels: Not Supported 00:07:56.612 Endurance Groups: Not Supported 00:07:56.612 Predictable Latency Mode: Not Supported 00:07:56.612 Traffic Based Keep ALive: Not Supported 00:07:56.612 Namespace Granularity: Not Supported 00:07:56.612 SQ Associations: Not Supported 00:07:56.612 UUID List: Not Supported 00:07:56.612 Multi-Domain Subsystem: Not Supported 00:07:56.612 Fixed Capacity Management: Not Supported 00:07:56.612 Variable Capacity Management: Not Supported 00:07:56.612 Delete Endurance Group: Not Supported 00:07:56.612 Delete NVM Set: Not Supported 00:07:56.612 Extended LBA Formats Supported: Supported 00:07:56.612 Flexible Data Placement Supported: Not Supported 00:07:56.612 00:07:56.612 Controller Memory Buffer Support 00:07:56.612 ================================ 00:07:56.612 Supported: No 00:07:56.612 00:07:56.612 Persistent Memory Region Support 00:07:56.612 ================================ 00:07:56.612 Supported: No 00:07:56.612 00:07:56.612 Admin Command Set Attributes 00:07:56.612 ============================ 00:07:56.612 Security Send/Receive: Not Supported 00:07:56.612 Format NVM: Supported 00:07:56.612 Firmware Activate/Download: Not Supported 00:07:56.612 Namespace Management: Supported 00:07:56.612 Device Self-Test: Not Supported 00:07:56.612 Directives: Supported 00:07:56.612 NVMe-MI: Not Supported 00:07:56.612 Virtualization Management: Not Supported 00:07:56.612 Doorbell Buffer Config: Supported 00:07:56.612 Get LBA Status Capability: Not Supported 00:07:56.612 Command & Feature Lockdown Capability: Not Supported 00:07:56.612 Abort Command Limit: 4 00:07:56.612 Async Event Request Limit: 4 00:07:56.612 Number of Firmware Slots: N/A 00:07:56.612 Firmware Slot 1 Read-Only: N/A 00:07:56.612 Firmware Activation Without Reset: N/A 00:07:56.612 Multiple Update Detection Support: N/A 00:07:56.612 Firmware Update Granularity: No Information Provided 00:07:56.612 Per-Namespace SMART Log: Yes 00:07:56.612 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.612 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:56.612 Command 
Effects Log Page: Supported 00:07:56.612 Get Log Page Extended Data: Supported 00:07:56.612 Telemetry Log Pages: Not Supported 00:07:56.612 Persistent Event Log Pages: Not Supported 00:07:56.612 Supported Log Pages Log Page: May Support 00:07:56.612 Commands Supported & Effects Log Page: Not Supported 00:07:56.612 Feature Identifiers & Effects Log Page:May Support 00:07:56.612 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.612 Data Area 4 for Telemetry Log: Not Supported 00:07:56.612 Error Log Page Entries Supported: 1 00:07:56.612 Keep Alive: Not Supported 00:07:56.612 00:07:56.612 NVM Command Set Attributes 00:07:56.612 ========================== 00:07:56.612 Submission Queue Entry Size 00:07:56.612 Max: 64 00:07:56.612 Min: 64 00:07:56.612 Completion Queue Entry Size 00:07:56.612 Max: 16 00:07:56.612 Min: 16 00:07:56.612 Number of Namespaces: 256 00:07:56.612 Compare Command: Supported 00:07:56.612 Write Uncorrectable Command: Not Supported 00:07:56.612 Dataset Management Command: Supported 00:07:56.612 Write Zeroes Command: Supported 00:07:56.612 Set Features Save Field: Supported 00:07:56.612 Reservations: Not Supported 00:07:56.612 Timestamp: Supported 00:07:56.612 Copy: Supported 00:07:56.612 Volatile Write Cache: Present 00:07:56.612 Atomic Write Unit (Normal): 1 00:07:56.612 Atomic Write Unit (PFail): 1 00:07:56.612 Atomic Compare & Write Unit: 1 00:07:56.612 Fused Compare & Write: Not Supported 00:07:56.612 Scatter-Gather List 00:07:56.612 SGL Command Set: Supported 00:07:56.612 SGL Keyed: Not Supported 00:07:56.613 SGL Bit Bucket Descriptor: Not Supported 00:07:56.613 SGL Metadata Pointer: Not Supported 00:07:56.613 Oversized SGL: Not Supported 00:07:56.613 SGL Metadata Address: Not Supported 00:07:56.613 SGL Offset: Not Supported 00:07:56.613 Transport SGL Data Block: Not Supported 00:07:56.613 Replay Protected Memory Block: Not Supported 00:07:56.613 00:07:56.613 Firmware Slot Information 00:07:56.613 ========================= 00:07:56.613 Active slot: 1 00:07:56.613 Slot 1 Firmware Revision: 1.0 00:07:56.613 00:07:56.613 00:07:56.613 Commands Supported and Effects 00:07:56.613 ============================== 00:07:56.613 Admin Commands 00:07:56.613 -------------- 00:07:56.613 Delete I/O Submission Queue (00h): Supported 00:07:56.613 Create I/O Submission Queue (01h): Supported 00:07:56.613 Get Log Page (02h): Supported 00:07:56.613 Delete I/O Completion Queue (04h): Supported 00:07:56.613 Create I/O Completion Queue (05h): Supported 00:07:56.613 Identify (06h): Supported 00:07:56.613 Abort (08h): Supported 00:07:56.613 Set Features (09h): Supported 00:07:56.613 Get Features (0Ah): Supported 00:07:56.613 Asynchronous Event Request (0Ch): Supported 00:07:56.613 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.613 Directive Send (19h): Supported 00:07:56.613 Directive Receive (1Ah): Supported 00:07:56.613 Virtualization Management (1Ch): Supported 00:07:56.613 Doorbell Buffer Config (7Ch): Supported 00:07:56.613 Format NVM (80h): Supported LBA-Change 00:07:56.613 I/O Commands 00:07:56.613 ------------ 00:07:56.613 Flush (00h): Supported LBA-Change 00:07:56.613 Write (01h): Supported LBA-Change 00:07:56.613 Read (02h): Supported 00:07:56.613 Compare (05h): Supported 00:07:56.613 Write Zeroes (08h): Supported LBA-Change 00:07:56.613 Dataset Management (09h): Supported LBA-Change 00:07:56.613 Unknown (0Ch): Supported 00:07:56.613 Unknown (12h): Supported 00:07:56.613 Copy (19h): Supported LBA-Change 00:07:56.613 Unknown (1Dh): Supported LBA-Change 
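
With four controllers dumped back to back, it can help to pull just a few summary fields out of each section. A minimal post-processing sketch, assuming the identify output above were saved to a file (identify.log is a hypothetical name, not produced by this run):

    # Show one summary field per controller section of a saved identify dump.
    grep -E 'Serial Number:|Current Temperature:' identify.log
    # e.g. Serial Number: 12341
    #      Current Temperature: 323 Kelvin (50 Celsius)
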
00:07:56.613 00:07:56.613 Error Log 00:07:56.613 ========= 00:07:56.613 00:07:56.613 Arbitration 00:07:56.613 =========== 00:07:56.613 Arbitration Burst: no limit 00:07:56.613 00:07:56.613 Power Management 00:07:56.613 ================ 00:07:56.613 Number of Power States: 1 00:07:56.613 Current Power State: Power State #0 00:07:56.613 Power State #0: 00:07:56.613 Max Power: 25.00 W 00:07:56.613 Non-Operational State: Operational 00:07:56.613 Entry Latency: 16 microseconds 00:07:56.613 Exit Latency: 4 microseconds 00:07:56.613 Relative Read Throughput: 0 00:07:56.613 Relative Read Latency: 0 00:07:56.613 Relative Write Throughput: 0 00:07:56.613 Relative Write Latency: 0 00:07:56.613 Idle Power: Not Reported 00:07:56.613 Active Power: Not Reported 00:07:56.613 Non-Operational Permissive Mode: Not Supported 00:07:56.613 00:07:56.613 Health Information 00:07:56.613 ================== 00:07:56.613 Critical Warnings: 00:07:56.613 Available Spare Space: OK 00:07:56.613 Temperature: OK 00:07:56.613 Device Reliability: OK 00:07:56.613 Read Only: No 00:07:56.613 Volatile Memory Backup: OK 00:07:56.613 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.613 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.613 Available Spare: 0% 00:07:56.613 Available Spare Threshold: 0% 00:07:56.613 Life Percentage Used: 0% 00:07:56.613 Data Units Read: 1034 00:07:56.613 Data Units Written: 901 00:07:56.613 Host Read Commands: 48758 00:07:56.613 Host Write Commands: 47551 00:07:56.613 Controller Busy Time: 0 minutes 00:07:56.613 Power Cycles: 0 00:07:56.613 Power On Hours: 0 hours 00:07:56.613 Unsafe Shutdowns: 0 00:07:56.613 Unrecoverable Media Errors: 0 00:07:56.613 Lifetime Error Log Entries: 0 00:07:56.613 Warning Temperature Time: 0 minutes 00:07:56.613 Critical Temperature Time: 0 minutes 00:07:56.613 00:07:56.613 Number of Queues 00:07:56.613 ================ 00:07:56.613 Number of I/O Submission Queues: 64 00:07:56.613 Number of I/O Completion Queues: 64 00:07:56.613 00:07:56.613 ZNS Specific Controller Data 00:07:56.613 ============================ 00:07:56.613 Zone Append Size Limit: 0 00:07:56.613 00:07:56.613 00:07:56.613 Active Namespaces 00:07:56.613 ================= 00:07:56.613 Namespace ID:1 00:07:56.613 Error Recovery Timeout: Unlimited 00:07:56.613 Command Set Identifier: NVM (00h) 00:07:56.613 Deallocate: Supported 00:07:56.613 Deallocated/Unwritten Error: Supported 00:07:56.613 Deallocated Read Value: All 0x00 00:07:56.613 Deallocate in Write Zeroes: Not Supported 00:07:56.613 Deallocated Guard Field: 0xFFFF 00:07:56.613 Flush: Supported 00:07:56.613 Reservation: Not Supported 00:07:56.613 Namespace Sharing Capabilities: Private 00:07:56.613 Size (in LBAs): 1310720 (5GiB) 00:07:56.613 Capacity (in LBAs): 1310720 (5GiB) 00:07:56.613 Utilization (in LBAs): 1310720 (5GiB) 00:07:56.613 Thin Provisioning: Not Supported 00:07:56.613 Per-NS Atomic Units: No 00:07:56.613 Maximum Single Source Range Length: 128 00:07:56.613 Maximum Copy Length: 128 00:07:56.613 Maximum Source Range Count: 128 00:07:56.613 NGUID/EUI64 Never Reused: No 00:07:56.613 Namespace Write Protected: No 00:07:56.613 Number of LBA Formats: 8 00:07:56.613 Current LBA Format: LBA Format #04 00:07:56.613 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.613 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.613 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.613 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.613 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.613 LBA Format #05: 
Data Size: 4096 Metadata Size: 8 00:07:56.613 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.613 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.613 00:07:56.613 NVM Specific Namespace Data 00:07:56.613 =========================== 00:07:56.613 Logical Block Storage Tag Mask: 0 00:07:56.613 Protection Information Capabilities: 00:07:56.613 16b Guard Protection Information Storage Tag Support: No 00:07:56.613 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.613 Storage Tag Check Read Support: No 00:07:56.613 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.613 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.613 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.613 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.613 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.613 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.613 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.613 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.613 ===================================================== 00:07:56.613 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:56.613 ===================================================== 00:07:56.613 Controller Capabilities/Features 00:07:56.613 ================================ 00:07:56.613 Vendor ID: 1b36 00:07:56.613 Subsystem Vendor ID: 1af4 00:07:56.613 Serial Number: 12343 00:07:56.613 Model Number: QEMU NVMe Ctrl 00:07:56.613 Firmware Version: 8.0.0 00:07:56.613 Recommended Arb Burst: 6 00:07:56.613 IEEE OUI Identifier: 00 54 52 00:07:56.613 Multi-path I/O 00:07:56.613 May have multiple subsystem ports: No 00:07:56.613 May have multiple controllers: Yes 00:07:56.613 Associated with SR-IOV VF: No 00:07:56.613 Max Data Transfer Size: 524288 00:07:56.613 Max Number of Namespaces: 256 00:07:56.613 Max Number of I/O Queues: 64 00:07:56.613 NVMe Specification Version (VS): 1.4 00:07:56.613 NVMe Specification Version (Identify): 1.4 00:07:56.613 Maximum Queue Entries: 2048 00:07:56.613 Contiguous Queues Required: Yes 00:07:56.613 Arbitration Mechanisms Supported 00:07:56.613 Weighted Round Robin: Not Supported 00:07:56.613 Vendor Specific: Not Supported 00:07:56.613 Reset Timeout: 7500 ms 00:07:56.613 Doorbell Stride: 4 bytes 00:07:56.613 NVM Subsystem Reset: Not Supported 00:07:56.613 Command Sets Supported 00:07:56.613 NVM Command Set: Supported 00:07:56.613 Boot Partition: Not Supported 00:07:56.613 Memory Page Size Minimum: 4096 bytes 00:07:56.613 Memory Page Size Maximum: 65536 bytes 00:07:56.613 Persistent Memory Region: Not Supported 00:07:56.613 Optional Asynchronous Events Supported 00:07:56.613 Namespace Attribute Notices: Supported 00:07:56.613 Firmware Activation Notices: Not Supported 00:07:56.613 ANA Change Notices: Not Supported 00:07:56.613 PLE Aggregate Log Change Notices: Not Supported 00:07:56.613 LBA Status Info Alert Notices: Not Supported 00:07:56.613 EGE Aggregate Log Change Notices: Not Supported 00:07:56.614 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.614 Zone Descriptor Change Notices: Not Supported 00:07:56.614 Discovery Log Change Notices: Not Supported 00:07:56.614 Controller 
Attributes 00:07:56.614 128-bit Host Identifier: Not Supported 00:07:56.614 Non-Operational Permissive Mode: Not Supported 00:07:56.614 NVM Sets: Not Supported 00:07:56.614 Read Recovery Levels: Not Supported 00:07:56.614 Endurance Groups: Supported 00:07:56.614 Predictable Latency Mode: Not Supported 00:07:56.614 Traffic Based Keep ALive: Not Supported 00:07:56.614 Namespace Granularity: Not Supported 00:07:56.614 SQ Associations: Not Supported 00:07:56.614 UUID List: Not Supported 00:07:56.614 Multi-Domain Subsystem: Not Supported 00:07:56.614 Fixed Capacity Management: Not Supported 00:07:56.614 Variable Capacity Management: Not Supported 00:07:56.614 Delete Endurance Group: Not Supported 00:07:56.614 Delete NVM Set: Not Supported 00:07:56.614 Extended LBA Formats Supported: Supported 00:07:56.614 Flexible Data Placement Supported: Supported 00:07:56.614 00:07:56.614 Controller Memory Buffer Support 00:07:56.614 ================================ 00:07:56.614 Supported: No 00:07:56.614 00:07:56.614 Persistent Memory Region Support 00:07:56.614 ================================ 00:07:56.614 Supported: No 00:07:56.614 00:07:56.614 Admin Command Set Attributes 00:07:56.614 ============================ 00:07:56.614 Security Send/Receive: Not Supported 00:07:56.614 Format NVM: Supported 00:07:56.614 Firmware Activate/Download: Not Supported 00:07:56.614 Namespace Management: Supported 00:07:56.614 Device Self-Test: Not Supported 00:07:56.614 Directives: Supported 00:07:56.614 NVMe-MI: Not Supported 00:07:56.614 Virtualization Management: Not Supported 00:07:56.614 Doorbell Buffer Config: Supported 00:07:56.614 Get LBA Status Capability: Not Supported 00:07:56.614 Command & Feature Lockdown Capability: Not Supported 00:07:56.614 Abort Command Limit: 4 00:07:56.614 Async Event Request Limit: 4 00:07:56.614 Number of Firmware Slots: N/A 00:07:56.614 Firmware Slot 1 Read-Only: N/A 00:07:56.614 Firmware Activation Without Reset: N/A 00:07:56.614 Multiple Update Detection Support: N/A 00:07:56.614 Firmware Update Granularity: No Information Provided 00:07:56.614 Per-Namespace SMART Log: Yes 00:07:56.614 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.614 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:56.614 Command Effects Log Page: Supported 00:07:56.614 Get Log Page Extended Data: Supported 00:07:56.614 Telemetry Log Pages: Not Supported 00:07:56.614 Persistent Event Log Pages: Not Supported 00:07:56.614 Supported Log Pages Log Page: May Support 00:07:56.614 Commands Supported & Effects Log Page: Not Supported 00:07:56.614 Feature Identifiers & Effects Log Page:May Support 00:07:56.614 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.614 Data Area 4 for Telemetry Log: Not Supported 00:07:56.614 Error Log Page Entries Supported: 1 00:07:56.614 Keep Alive: Not Supported 00:07:56.614 00:07:56.614 NVM Command Set Attributes 00:07:56.614 ========================== 00:07:56.614 Submission Queue Entry Size 00:07:56.614 Max: 64 00:07:56.614 Min: 64 00:07:56.614 Completion Queue Entry Size 00:07:56.614 Max: 16 00:07:56.614 Min: 16 00:07:56.614 Number of Namespaces: 256 00:07:56.614 Compare Command: Supported 00:07:56.614 Write Uncorrectable Command: Not Supported 00:07:56.614 Dataset Management Command: Supported 00:07:56.614 Write Zeroes Command: Supported 00:07:56.614 Set Features Save Field: Supported 00:07:56.614 Reservations: Not Supported 00:07:56.614 Timestamp: Supported 00:07:56.614 Copy: Supported 00:07:56.614 Volatile Write Cache: Present 00:07:56.614 Atomic Write Unit 
(Normal): 1 00:07:56.614 Atomic Write Unit (PFail): 1 00:07:56.614 Atomic Compare & Write Unit: 1 00:07:56.614 Fused Compare & Write: Not Supported 00:07:56.614 Scatter-Gather List 00:07:56.614 SGL Command Set: Supported 00:07:56.614 SGL Keyed: Not Supported 00:07:56.614 SGL Bit Bucket Descriptor: Not Supported 00:07:56.614 SGL Metadata Pointer: Not Supported 00:07:56.614 Oversized SGL: Not Supported 00:07:56.614 SGL Metadata Address: Not Supported 00:07:56.614 SGL Offset: Not Supported 00:07:56.614 Transport SGL Data Block: Not Supported 00:07:56.614 Replay Protected Memory Block: Not Supported 00:07:56.614 00:07:56.614 Firmware Slot Information 00:07:56.614 ========================= 00:07:56.614 Active slot: 1 00:07:56.614 Slot 1 Firmware Revision: 1.0 00:07:56.614 00:07:56.614 00:07:56.614 Commands Supported and Effects 00:07:56.614 ============================== 00:07:56.614 Admin Commands 00:07:56.614 -------------- 00:07:56.614 Delete I/O Submission Queue (00h): Supported 00:07:56.614 Create I/O Submission Queue (01h): Supported 00:07:56.614 Get Log Page (02h): Supported 00:07:56.614 Delete I/O Completion Queue (04h): Supported 00:07:56.614 Create I/O Completion Queue (05h): Supported 00:07:56.614 Identify (06h): Supported 00:07:56.614 Abort (08h): Supported 00:07:56.614 Set Features (09h): Supported 00:07:56.614 Get Features (0Ah): Supported 00:07:56.614 Asynchronous Event Request (0Ch): Supported 00:07:56.614 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.614 Directive Send (19h): Supported 00:07:56.614 Directive Receive (1Ah): Supported 00:07:56.614 Virtualization Management (1Ch): Supported 00:07:56.614 Doorbell Buffer Config (7Ch): Supported 00:07:56.614 Format NVM (80h): Supported LBA-Change 00:07:56.614 I/O Commands 00:07:56.614 ------------ 00:07:56.614 Flush (00h): Supported LBA-Change 00:07:56.614 Write (01h): Supported LBA-Change 00:07:56.614 Read (02h): Supported 00:07:56.614 Compare (05h): Supported 00:07:56.614 Write Zeroes (08h): Supported LBA-Change 00:07:56.614 Dataset Management (09h): Supported LBA-Change 00:07:56.614 Unknown (0Ch): Supported 00:07:56.614 Unknown (12h): Supported 00:07:56.614 Copy (19h): Supported LBA-Change 00:07:56.614 Unknown (1Dh): Supported LBA-Change 00:07:56.614 00:07:56.614 Error Log 00:07:56.614 ========= 00:07:56.614 00:07:56.614 Arbitration 00:07:56.614 =========== 00:07:56.614 Arbitration Burst: no limit 00:07:56.614 00:07:56.614 Power Management 00:07:56.614 ================ 00:07:56.614 Number of Power States: 1 00:07:56.614 Current Power State: Power State #0 00:07:56.614 Power State #0: 00:07:56.614 Max Power: 25.00 W 00:07:56.614 Non-Operational State: Operational 00:07:56.614 Entry Latency: 16 microseconds 00:07:56.614 Exit Latency: 4 microseconds 00:07:56.614 Relative Read Throughput: 0 00:07:56.614 Relative Read Latency: 0 00:07:56.614 Relative Write Throughput: 0 00:07:56.614 Relative Write Latency: 0 00:07:56.614 Idle Power: Not Reported 00:07:56.614 Active Power: Not Reported 00:07:56.614 Non-Operational Permissive Mode: Not Supported 00:07:56.614 00:07:56.614 Health Information 00:07:56.614 ================== 00:07:56.614 Critical Warnings: 00:07:56.614 Available Spare Space: OK 00:07:56.614 Temperature: OK 00:07:56.614 Device Reliability: OK 00:07:56.614 Read Only: No 00:07:56.614 Volatile Memory Backup: OK 00:07:56.614 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.614 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.614 Available Spare: 0% 00:07:56.614 Available Spare Threshold: 0% 
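
The namespace sizes in these dumps are consistent with the active LBA format: the 12341 controller above reports 1310720 LBAs under LBA Format #04 (4096-byte data size), which is exactly the 5GiB shown. A quick arithmetic check in shell (illustrative only):

    echo $((1310720 * 4096))                        # 5368709120 bytes
    echo $((1310720 * 4096 / 1024 / 1024 / 1024))   # 5 (GiB), matching "1310720 (5GiB)"
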
00:07:56.614 Life Percentage Used: 0% 00:07:56.614 Data Units Read: 778 00:07:56.614 Data Units Written: 707 00:07:56.614 Host Read Commands: 33876 00:07:56.614 Host Write Commands: 33299 00:07:56.614 Controller Busy Time: 0 minutes 00:07:56.614 Power Cycles: 0 00:07:56.614 Power On Hours: 0 hours 00:07:56.614 Unsafe Shutdowns: 0 00:07:56.614 Unrecoverable Media Errors: 0 00:07:56.614 Lifetime Error Log Entries: 0 00:07:56.614 Warning Temperature Time: 0 minutes 00:07:56.614 Critical Temperature Time: 0 minutes 00:07:56.614 00:07:56.614 Number of Queues 00:07:56.614 ================ 00:07:56.614 Number of I/O Submission Queues: 64 00:07:56.614 Number of I/O Completion Queues: 64 00:07:56.614 00:07:56.614 ZNS Specific Controller Data 00:07:56.614 ============================ 00:07:56.614 Zone Append Size Limit: 0 00:07:56.614 00:07:56.614 00:07:56.614 Active Namespaces 00:07:56.614 ================= 00:07:56.614 Namespace ID:1 00:07:56.614 Error Recovery Timeout: Unlimited 00:07:56.614 Command Set Identifier: NVM (00h) 00:07:56.614 Deallocate: Supported 00:07:56.614 Deallocated/Unwritten Error: Supported 00:07:56.614 Deallocated Read Value: All 0x00 00:07:56.614 Deallocate in Write Zeroes: Not Supported 00:07:56.614 Deallocated Guard Field: 0xFFFF 00:07:56.615 Flush: Supported 00:07:56.615 Reservation: Not Supported 00:07:56.615 Namespace Sharing Capabilities: Multiple Controllers 00:07:56.615 Size (in LBAs): 262144 (1GiB) 00:07:56.615 Capacity (in LBAs): 262144 (1GiB) 00:07:56.615 Utilization (in LBAs): 262144 (1GiB) 00:07:56.615 Thin Provisioning: Not Supported 00:07:56.615 Per-NS Atomic Units: No 00:07:56.615 Maximum Single Source Range Length: 128 00:07:56.615 Maximum Copy Length: 128 00:07:56.615 Maximum Source Range Count: 128 00:07:56.615 NGUID/EUI64 Never Reused: No 00:07:56.615 Namespace Write Protected: No 00:07:56.615 Endurance group ID: 1 00:07:56.615 Number of LBA Formats: 8 00:07:56.615 Current LBA Format: LBA Format #04 00:07:56.615 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.615 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.615 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.615 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.615 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.615 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.615 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.615 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.615 00:07:56.615 Get Feature FDP: 00:07:56.615 ================ 00:07:56.615 Enabled: Yes 00:07:56.615 FDP configuration index: 0 00:07:56.615 00:07:56.615 FDP configurations log page 00:07:56.615 =========================== 00:07:56.615 Number of FDP configurations: 1 00:07:56.615 Version: 0 00:07:56.615 Size: 112 00:07:56.615 FDP Configuration Descriptor: 0 00:07:56.615 Descriptor Size: 96 00:07:56.615 Reclaim Group Identifier format: 2 00:07:56.615 FDP Volatile Write Cache: Not Present 00:07:56.615 FDP Configuration: Valid 00:07:56.615 Vendor Specific Size: 0 00:07:56.615 Number of Reclaim Groups: 2 00:07:56.615 Number of Recalim Unit Handles: 8 00:07:56.615 Max Placement Identifiers: 128 00:07:56.615 Number of Namespaces Suppprted: 256 00:07:56.615 Reclaim unit Nominal Size: 6000000 bytes 00:07:56.615 Estimated Reclaim Unit Time Limit: Not Reported 00:07:56.615 RUH Desc #000: RUH Type: Initially Isolated 00:07:56.615 RUH Desc #001: RUH Type: Initially Isolated 00:07:56.615 RUH Desc #002: RUH Type: Initially Isolated 00:07:56.615 RUH Desc #003: RUH Type: 
Initially Isolated 00:07:56.615 RUH Desc #004: RUH Type: Initially Isolated 00:07:56.615 RUH Desc #005: RUH Type: Initially Isolated 00:07:56.615 RUH Desc #006: RUH Type: Initially Isolated 00:07:56.615 RUH Desc #007: RUH Type: Initially Isolated 00:07:56.615 00:07:56.615 FDP reclaim unit handle usage log page 00:07:56.615 ====================================== 00:07:56.615 Number of Reclaim Unit Handles: 8 00:07:56.615 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:56.615 RUH Usage Desc #001: RUH Attributes: Unused 00:07:56.615 RUH Usage Desc #002: RUH Attributes: Unused 00:07:56.615 RUH Usage Desc #003: RUH Attributes: Unused 00:07:56.615 RUH Usage Desc #004: RUH Attributes: Unused 00:07:56.615 RUH Usage Desc #005: RUH Attributes: Unused 00:07:56.615 RUH Usage Desc #006: RUH Attributes: Unused 00:07:56.615 RUH Usage Desc #007: RUH Attributes: Unused 00:07:56.615 00:07:56.615 FDP statistics log page 00:07:56.615 ======================= 00:07:56.615 Host bytes with metadata written: 447782912 00:07:56.615 [2024-11-17 08:08:01.413922] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63699 terminated unexpected 00:07:56.615 [2024-11-17 08:08:01.414690] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63699 terminated unexpected 00:07:56.615 [2024-11-17 08:08:01.416055] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63699 terminated unexpected 00:07:56.615 Media bytes with metadata written: 447848448 00:07:56.615 Media bytes erased: 0 00:07:56.615 00:07:56.615 FDP events log page 00:07:56.615 =================== 00:07:56.615 Number of FDP events: 0 00:07:56.615 00:07:56.615 NVM Specific Namespace Data 00:07:56.615 =========================== 00:07:56.615 Logical Block Storage Tag Mask: 0 00:07:56.615 Protection Information Capabilities: 00:07:56.615 16b Guard Protection Information Storage Tag Support: No 00:07:56.615 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.615 Storage Tag Check Read Support: No 00:07:56.615 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.615 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.615 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.615 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.615 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.615 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.615 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.615 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.615 ===================================================== 00:07:56.615 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:56.615 ===================================================== 00:07:56.615 Controller Capabilities/Features 00:07:56.615 ================================ 00:07:56.615 Vendor ID: 1b36 00:07:56.615 Subsystem Vendor ID: 1af4 00:07:56.615 Serial Number: 12342 00:07:56.615 Model Number: QEMU NVMe Ctrl 00:07:56.615 Firmware Version: 8.0.0 00:07:56.615 Recommended Arb Burst: 6 00:07:56.615 IEEE OUI Identifier: 00 54 52 00:07:56.615 Multi-path I/O 00:07:56.615 May 
have multiple subsystem ports: No 00:07:56.615 May have multiple controllers: No 00:07:56.615 Associated with SR-IOV VF: No 00:07:56.615 Max Data Transfer Size: 524288 00:07:56.615 Max Number of Namespaces: 256 00:07:56.615 Max Number of I/O Queues: 64 00:07:56.615 NVMe Specification Version (VS): 1.4 00:07:56.615 NVMe Specification Version (Identify): 1.4 00:07:56.615 Maximum Queue Entries: 2048 00:07:56.615 Contiguous Queues Required: Yes 00:07:56.615 Arbitration Mechanisms Supported 00:07:56.615 Weighted Round Robin: Not Supported 00:07:56.615 Vendor Specific: Not Supported 00:07:56.615 Reset Timeout: 7500 ms 00:07:56.615 Doorbell Stride: 4 bytes 00:07:56.615 NVM Subsystem Reset: Not Supported 00:07:56.615 Command Sets Supported 00:07:56.615 NVM Command Set: Supported 00:07:56.615 Boot Partition: Not Supported 00:07:56.615 Memory Page Size Minimum: 4096 bytes 00:07:56.615 Memory Page Size Maximum: 65536 bytes 00:07:56.615 Persistent Memory Region: Not Supported 00:07:56.615 Optional Asynchronous Events Supported 00:07:56.615 Namespace Attribute Notices: Supported 00:07:56.615 Firmware Activation Notices: Not Supported 00:07:56.615 ANA Change Notices: Not Supported 00:07:56.615 PLE Aggregate Log Change Notices: Not Supported 00:07:56.615 LBA Status Info Alert Notices: Not Supported 00:07:56.615 EGE Aggregate Log Change Notices: Not Supported 00:07:56.615 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.615 Zone Descriptor Change Notices: Not Supported 00:07:56.615 Discovery Log Change Notices: Not Supported 00:07:56.615 Controller Attributes 00:07:56.615 128-bit Host Identifier: Not Supported 00:07:56.615 Non-Operational Permissive Mode: Not Supported 00:07:56.615 NVM Sets: Not Supported 00:07:56.615 Read Recovery Levels: Not Supported 00:07:56.615 Endurance Groups: Not Supported 00:07:56.615 Predictable Latency Mode: Not Supported 00:07:56.615 Traffic Based Keep ALive: Not Supported 00:07:56.616 Namespace Granularity: Not Supported 00:07:56.616 SQ Associations: Not Supported 00:07:56.616 UUID List: Not Supported 00:07:56.616 Multi-Domain Subsystem: Not Supported 00:07:56.616 Fixed Capacity Management: Not Supported 00:07:56.616 Variable Capacity Management: Not Supported 00:07:56.616 Delete Endurance Group: Not Supported 00:07:56.616 Delete NVM Set: Not Supported 00:07:56.616 Extended LBA Formats Supported: Supported 00:07:56.616 Flexible Data Placement Supported: Not Supported 00:07:56.616 00:07:56.616 Controller Memory Buffer Support 00:07:56.616 ================================ 00:07:56.616 Supported: No 00:07:56.616 00:07:56.616 Persistent Memory Region Support 00:07:56.616 ================================ 00:07:56.616 Supported: No 00:07:56.616 00:07:56.616 Admin Command Set Attributes 00:07:56.616 ============================ 00:07:56.616 Security Send/Receive: Not Supported 00:07:56.616 Format NVM: Supported 00:07:56.616 Firmware Activate/Download: Not Supported 00:07:56.616 Namespace Management: Supported 00:07:56.616 Device Self-Test: Not Supported 00:07:56.616 Directives: Supported 00:07:56.616 NVMe-MI: Not Supported 00:07:56.616 Virtualization Management: Not Supported 00:07:56.616 Doorbell Buffer Config: Supported 00:07:56.616 Get LBA Status Capability: Not Supported 00:07:56.616 Command & Feature Lockdown Capability: Not Supported 00:07:56.616 Abort Command Limit: 4 00:07:56.616 Async Event Request Limit: 4 00:07:56.616 Number of Firmware Slots: N/A 00:07:56.616 Firmware Slot 1 Read-Only: N/A 00:07:56.616 Firmware Activation Without Reset: N/A 00:07:56.616 
Multiple Update Detection Support: N/A 00:07:56.616 Firmware Update Granularity: No Information Provided 00:07:56.616 Per-Namespace SMART Log: Yes 00:07:56.616 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.616 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:56.616 Command Effects Log Page: Supported 00:07:56.616 Get Log Page Extended Data: Supported 00:07:56.616 Telemetry Log Pages: Not Supported 00:07:56.616 Persistent Event Log Pages: Not Supported 00:07:56.616 Supported Log Pages Log Page: May Support 00:07:56.616 Commands Supported & Effects Log Page: Not Supported 00:07:56.616 Feature Identifiers & Effects Log Page:May Support 00:07:56.616 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.616 Data Area 4 for Telemetry Log: Not Supported 00:07:56.616 Error Log Page Entries Supported: 1 00:07:56.616 Keep Alive: Not Supported 00:07:56.616 00:07:56.616 NVM Command Set Attributes 00:07:56.616 ========================== 00:07:56.616 Submission Queue Entry Size 00:07:56.616 Max: 64 00:07:56.616 Min: 64 00:07:56.616 Completion Queue Entry Size 00:07:56.616 Max: 16 00:07:56.616 Min: 16 00:07:56.616 Number of Namespaces: 256 00:07:56.616 Compare Command: Supported 00:07:56.616 Write Uncorrectable Command: Not Supported 00:07:56.616 Dataset Management Command: Supported 00:07:56.616 Write Zeroes Command: Supported 00:07:56.616 Set Features Save Field: Supported 00:07:56.616 Reservations: Not Supported 00:07:56.616 Timestamp: Supported 00:07:56.616 Copy: Supported 00:07:56.616 Volatile Write Cache: Present 00:07:56.616 Atomic Write Unit (Normal): 1 00:07:56.616 Atomic Write Unit (PFail): 1 00:07:56.616 Atomic Compare & Write Unit: 1 00:07:56.616 Fused Compare & Write: Not Supported 00:07:56.616 Scatter-Gather List 00:07:56.616 SGL Command Set: Supported 00:07:56.616 SGL Keyed: Not Supported 00:07:56.616 SGL Bit Bucket Descriptor: Not Supported 00:07:56.616 SGL Metadata Pointer: Not Supported 00:07:56.616 Oversized SGL: Not Supported 00:07:56.616 SGL Metadata Address: Not Supported 00:07:56.616 SGL Offset: Not Supported 00:07:56.616 Transport SGL Data Block: Not Supported 00:07:56.616 Replay Protected Memory Block: Not Supported 00:07:56.616 00:07:56.616 Firmware Slot Information 00:07:56.616 ========================= 00:07:56.616 Active slot: 1 00:07:56.616 Slot 1 Firmware Revision: 1.0 00:07:56.616 00:07:56.616 00:07:56.616 Commands Supported and Effects 00:07:56.616 ============================== 00:07:56.616 Admin Commands 00:07:56.616 -------------- 00:07:56.616 Delete I/O Submission Queue (00h): Supported 00:07:56.616 Create I/O Submission Queue (01h): Supported 00:07:56.616 Get Log Page (02h): Supported 00:07:56.616 Delete I/O Completion Queue (04h): Supported 00:07:56.616 Create I/O Completion Queue (05h): Supported 00:07:56.616 Identify (06h): Supported 00:07:56.616 Abort (08h): Supported 00:07:56.616 Set Features (09h): Supported 00:07:56.616 Get Features (0Ah): Supported 00:07:56.616 Asynchronous Event Request (0Ch): Supported 00:07:56.616 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.616 Directive Send (19h): Supported 00:07:56.616 Directive Receive (1Ah): Supported 00:07:56.616 Virtualization Management (1Ch): Supported 00:07:56.616 Doorbell Buffer Config (7Ch): Supported 00:07:56.616 Format NVM (80h): Supported LBA-Change 00:07:56.616 I/O Commands 00:07:56.616 ------------ 00:07:56.616 Flush (00h): Supported LBA-Change 00:07:56.616 Write (01h): Supported LBA-Change 00:07:56.616 Read (02h): Supported 00:07:56.616 Compare (05h): Supported 
00:07:56.616 Write Zeroes (08h): Supported LBA-Change 00:07:56.616 Dataset Management (09h): Supported LBA-Change 00:07:56.616 Unknown (0Ch): Supported 00:07:56.616 Unknown (12h): Supported 00:07:56.616 Copy (19h): Supported LBA-Change 00:07:56.616 Unknown (1Dh): Supported LBA-Change 00:07:56.616 00:07:56.616 Error Log 00:07:56.616 ========= 00:07:56.616 00:07:56.616 Arbitration 00:07:56.616 =========== 00:07:56.616 Arbitration Burst: no limit 00:07:56.616 00:07:56.616 Power Management 00:07:56.616 ================ 00:07:56.616 Number of Power States: 1 00:07:56.616 Current Power State: Power State #0 00:07:56.616 Power State #0: 00:07:56.616 Max Power: 25.00 W 00:07:56.616 Non-Operational State: Operational 00:07:56.616 Entry Latency: 16 microseconds 00:07:56.616 Exit Latency: 4 microseconds 00:07:56.616 Relative Read Throughput: 0 00:07:56.616 Relative Read Latency: 0 00:07:56.616 Relative Write Throughput: 0 00:07:56.616 Relative Write Latency: 0 00:07:56.616 Idle Power: Not Reported 00:07:56.616 Active Power: Not Reported 00:07:56.616 Non-Operational Permissive Mode: Not Supported 00:07:56.616 00:07:56.616 Health Information 00:07:56.616 ================== 00:07:56.616 Critical Warnings: 00:07:56.616 Available Spare Space: OK 00:07:56.616 Temperature: OK 00:07:56.616 Device Reliability: OK 00:07:56.616 Read Only: No 00:07:56.616 Volatile Memory Backup: OK 00:07:56.616 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.616 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.616 Available Spare: 0% 00:07:56.616 Available Spare Threshold: 0% 00:07:56.616 Life Percentage Used: 0% 00:07:56.616 Data Units Read: 2159 00:07:56.616 Data Units Written: 1946 00:07:56.616 Host Read Commands: 100205 00:07:56.616 Host Write Commands: 98474 00:07:56.616 Controller Busy Time: 0 minutes 00:07:56.616 Power Cycles: 0 00:07:56.616 Power On Hours: 0 hours 00:07:56.616 Unsafe Shutdowns: 0 00:07:56.616 Unrecoverable Media Errors: 0 00:07:56.616 Lifetime Error Log Entries: 0 00:07:56.616 Warning Temperature Time: 0 minutes 00:07:56.616 Critical Temperature Time: 0 minutes 00:07:56.616 00:07:56.616 Number of Queues 00:07:56.616 ================ 00:07:56.616 Number of I/O Submission Queues: 64 00:07:56.616 Number of I/O Completion Queues: 64 00:07:56.616 00:07:56.616 ZNS Specific Controller Data 00:07:56.616 ============================ 00:07:56.616 Zone Append Size Limit: 0 00:07:56.616 00:07:56.616 00:07:56.616 Active Namespaces 00:07:56.616 ================= 00:07:56.616 Namespace ID:1 00:07:56.616 Error Recovery Timeout: Unlimited 00:07:56.616 Command Set Identifier: NVM (00h) 00:07:56.616 Deallocate: Supported 00:07:56.616 Deallocated/Unwritten Error: Supported 00:07:56.616 Deallocated Read Value: All 0x00 00:07:56.616 Deallocate in Write Zeroes: Not Supported 00:07:56.616 Deallocated Guard Field: 0xFFFF 00:07:56.616 Flush: Supported 00:07:56.616 Reservation: Not Supported 00:07:56.616 Namespace Sharing Capabilities: Private 00:07:56.616 Size (in LBAs): 1048576 (4GiB) 00:07:56.616 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.616 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.616 Thin Provisioning: Not Supported 00:07:56.616 Per-NS Atomic Units: No 00:07:56.616 Maximum Single Source Range Length: 128 00:07:56.616 Maximum Copy Length: 128 00:07:56.616 Maximum Source Range Count: 128 00:07:56.617 NGUID/EUI64 Never Reused: No 00:07:56.617 Namespace Write Protected: No 00:07:56.617 Number of LBA Formats: 8 00:07:56.617 Current LBA Format: LBA Format #04 00:07:56.617 LBA Format #00: Data Size: 512 Metadata 
Size: 0 00:07:56.617 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.617 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.617 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.617 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.617 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.617 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.617 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.617 00:07:56.617 NVM Specific Namespace Data 00:07:56.617 =========================== 00:07:56.617 Logical Block Storage Tag Mask: 0 00:07:56.617 Protection Information Capabilities: 00:07:56.617 16b Guard Protection Information Storage Tag Support: No 00:07:56.617 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.617 Storage Tag Check Read Support: No 00:07:56.617 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Namespace ID:2 00:07:56.617 Error Recovery Timeout: Unlimited 00:07:56.617 Command Set Identifier: NVM (00h) 00:07:56.617 Deallocate: Supported 00:07:56.617 Deallocated/Unwritten Error: Supported 00:07:56.617 Deallocated Read Value: All 0x00 00:07:56.617 Deallocate in Write Zeroes: Not Supported 00:07:56.617 Deallocated Guard Field: 0xFFFF 00:07:56.617 Flush: Supported 00:07:56.617 Reservation: Not Supported 00:07:56.617 Namespace Sharing Capabilities: Private 00:07:56.617 Size (in LBAs): 1048576 (4GiB) 00:07:56.617 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.617 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.617 Thin Provisioning: Not Supported 00:07:56.617 Per-NS Atomic Units: No 00:07:56.617 Maximum Single Source Range Length: 128 00:07:56.617 Maximum Copy Length: 128 00:07:56.617 Maximum Source Range Count: 128 00:07:56.617 NGUID/EUI64 Never Reused: No 00:07:56.617 Namespace Write Protected: No 00:07:56.617 Number of LBA Formats: 8 00:07:56.617 Current LBA Format: LBA Format #04 00:07:56.617 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.617 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.617 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.617 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.617 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.617 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.617 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.617 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.617 00:07:56.617 NVM Specific Namespace Data 00:07:56.617 =========================== 00:07:56.617 Logical Block Storage Tag Mask: 0 00:07:56.617 Protection Information Capabilities: 00:07:56.617 16b Guard Protection Information Storage Tag Support: No 00:07:56.617 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.617 Storage 
Tag Check Read Support: No 00:07:56.617 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Namespace ID:3 00:07:56.617 Error Recovery Timeout: Unlimited 00:07:56.617 Command Set Identifier: NVM (00h) 00:07:56.617 Deallocate: Supported 00:07:56.617 Deallocated/Unwritten Error: Supported 00:07:56.617 Deallocated Read Value: All 0x00 00:07:56.617 Deallocate in Write Zeroes: Not Supported 00:07:56.617 Deallocated Guard Field: 0xFFFF 00:07:56.617 Flush: Supported 00:07:56.617 Reservation: Not Supported 00:07:56.617 Namespace Sharing Capabilities: Private 00:07:56.617 Size (in LBAs): 1048576 (4GiB) 00:07:56.617 Capacity (in LBAs): 1048576 (4GiB) 00:07:56.617 Utilization (in LBAs): 1048576 (4GiB) 00:07:56.617 Thin Provisioning: Not Supported 00:07:56.617 Per-NS Atomic Units: No 00:07:56.617 Maximum Single Source Range Length: 128 00:07:56.617 Maximum Copy Length: 128 00:07:56.617 Maximum Source Range Count: 128 00:07:56.617 NGUID/EUI64 Never Reused: No 00:07:56.617 Namespace Write Protected: No 00:07:56.617 Number of LBA Formats: 8 00:07:56.617 Current LBA Format: LBA Format #04 00:07:56.617 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.617 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.617 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.617 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.617 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.617 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.617 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.617 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.617 00:07:56.617 NVM Specific Namespace Data 00:07:56.617 =========================== 00:07:56.617 Logical Block Storage Tag Mask: 0 00:07:56.617 Protection Information Capabilities: 00:07:56.617 16b Guard Protection Information Storage Tag Support: No 00:07:56.617 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.617 Storage Tag Check Read Support: No 00:07:56.617 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.617 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:07:56.617 08:08:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:56.617 08:08:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:56.877 ===================================================== 00:07:56.877 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:56.877 ===================================================== 00:07:56.877 Controller Capabilities/Features 00:07:56.877 ================================ 00:07:56.877 Vendor ID: 1b36 00:07:56.877 Subsystem Vendor ID: 1af4 00:07:56.877 Serial Number: 12340 00:07:56.877 Model Number: QEMU NVMe Ctrl 00:07:56.877 Firmware Version: 8.0.0 00:07:56.877 Recommended Arb Burst: 6 00:07:56.877 IEEE OUI Identifier: 00 54 52 00:07:56.877 Multi-path I/O 00:07:56.877 May have multiple subsystem ports: No 00:07:56.877 May have multiple controllers: No 00:07:56.877 Associated with SR-IOV VF: No 00:07:56.877 Max Data Transfer Size: 524288 00:07:56.877 Max Number of Namespaces: 256 00:07:56.877 Max Number of I/O Queues: 64 00:07:56.877 NVMe Specification Version (VS): 1.4 00:07:56.877 NVMe Specification Version (Identify): 1.4 00:07:56.877 Maximum Queue Entries: 2048 00:07:56.877 Contiguous Queues Required: Yes 00:07:56.877 Arbitration Mechanisms Supported 00:07:56.877 Weighted Round Robin: Not Supported 00:07:56.877 Vendor Specific: Not Supported 00:07:56.877 Reset Timeout: 7500 ms 00:07:56.877 Doorbell Stride: 4 bytes 00:07:56.877 NVM Subsystem Reset: Not Supported 00:07:56.877 Command Sets Supported 00:07:56.877 NVM Command Set: Supported 00:07:56.877 Boot Partition: Not Supported 00:07:56.877 Memory Page Size Minimum: 4096 bytes 00:07:56.877 Memory Page Size Maximum: 65536 bytes 00:07:56.877 Persistent Memory Region: Not Supported 00:07:56.877 Optional Asynchronous Events Supported 00:07:56.877 Namespace Attribute Notices: Supported 00:07:56.877 Firmware Activation Notices: Not Supported 00:07:56.877 ANA Change Notices: Not Supported 00:07:56.877 PLE Aggregate Log Change Notices: Not Supported 00:07:56.877 LBA Status Info Alert Notices: Not Supported 00:07:56.877 EGE Aggregate Log Change Notices: Not Supported 00:07:56.877 Normal NVM Subsystem Shutdown event: Not Supported 00:07:56.877 Zone Descriptor Change Notices: Not Supported 00:07:56.877 Discovery Log Change Notices: Not Supported 00:07:56.877 Controller Attributes 00:07:56.877 128-bit Host Identifier: Not Supported 00:07:56.877 Non-Operational Permissive Mode: Not Supported 00:07:56.877 NVM Sets: Not Supported 00:07:56.877 Read Recovery Levels: Not Supported 00:07:56.877 Endurance Groups: Not Supported 00:07:56.877 Predictable Latency Mode: Not Supported 00:07:56.877 Traffic Based Keep ALive: Not Supported 00:07:56.877 Namespace Granularity: Not Supported 00:07:56.877 SQ Associations: Not Supported 00:07:56.877 UUID List: Not Supported 00:07:56.877 Multi-Domain Subsystem: Not Supported 00:07:56.877 Fixed Capacity Management: Not Supported 00:07:56.877 Variable Capacity Management: Not Supported 00:07:56.877 Delete Endurance Group: Not Supported 00:07:56.877 Delete NVM Set: Not Supported 00:07:56.877 Extended LBA Formats Supported: Supported 00:07:56.877 Flexible Data Placement Supported: Not Supported 00:07:56.877 00:07:56.878 Controller Memory Buffer Support 00:07:56.878 ================================ 00:07:56.878 Supported: No 00:07:56.878 00:07:56.878 Persistent Memory Region Support 00:07:56.878 
================================ 00:07:56.878 Supported: No 00:07:56.878 00:07:56.878 Admin Command Set Attributes 00:07:56.878 ============================ 00:07:56.878 Security Send/Receive: Not Supported 00:07:56.878 Format NVM: Supported 00:07:56.878 Firmware Activate/Download: Not Supported 00:07:56.878 Namespace Management: Supported 00:07:56.878 Device Self-Test: Not Supported 00:07:56.878 Directives: Supported 00:07:56.878 NVMe-MI: Not Supported 00:07:56.878 Virtualization Management: Not Supported 00:07:56.878 Doorbell Buffer Config: Supported 00:07:56.878 Get LBA Status Capability: Not Supported 00:07:56.878 Command & Feature Lockdown Capability: Not Supported 00:07:56.878 Abort Command Limit: 4 00:07:56.878 Async Event Request Limit: 4 00:07:56.878 Number of Firmware Slots: N/A 00:07:56.878 Firmware Slot 1 Read-Only: N/A 00:07:56.878 Firmware Activation Without Reset: N/A 00:07:56.878 Multiple Update Detection Support: N/A 00:07:56.878 Firmware Update Granularity: No Information Provided 00:07:56.878 Per-Namespace SMART Log: Yes 00:07:56.878 Asymmetric Namespace Access Log Page: Not Supported 00:07:56.878 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:56.878 Command Effects Log Page: Supported 00:07:56.878 Get Log Page Extended Data: Supported 00:07:56.878 Telemetry Log Pages: Not Supported 00:07:56.878 Persistent Event Log Pages: Not Supported 00:07:56.878 Supported Log Pages Log Page: May Support 00:07:56.878 Commands Supported & Effects Log Page: Not Supported 00:07:56.878 Feature Identifiers & Effects Log Page:May Support 00:07:56.878 NVMe-MI Commands & Effects Log Page: May Support 00:07:56.878 Data Area 4 for Telemetry Log: Not Supported 00:07:56.878 Error Log Page Entries Supported: 1 00:07:56.878 Keep Alive: Not Supported 00:07:56.878 00:07:56.878 NVM Command Set Attributes 00:07:56.878 ========================== 00:07:56.878 Submission Queue Entry Size 00:07:56.878 Max: 64 00:07:56.878 Min: 64 00:07:56.878 Completion Queue Entry Size 00:07:56.878 Max: 16 00:07:56.878 Min: 16 00:07:56.878 Number of Namespaces: 256 00:07:56.878 Compare Command: Supported 00:07:56.878 Write Uncorrectable Command: Not Supported 00:07:56.878 Dataset Management Command: Supported 00:07:56.878 Write Zeroes Command: Supported 00:07:56.878 Set Features Save Field: Supported 00:07:56.878 Reservations: Not Supported 00:07:56.878 Timestamp: Supported 00:07:56.878 Copy: Supported 00:07:56.878 Volatile Write Cache: Present 00:07:56.878 Atomic Write Unit (Normal): 1 00:07:56.878 Atomic Write Unit (PFail): 1 00:07:56.878 Atomic Compare & Write Unit: 1 00:07:56.878 Fused Compare & Write: Not Supported 00:07:56.878 Scatter-Gather List 00:07:56.878 SGL Command Set: Supported 00:07:56.878 SGL Keyed: Not Supported 00:07:56.878 SGL Bit Bucket Descriptor: Not Supported 00:07:56.878 SGL Metadata Pointer: Not Supported 00:07:56.878 Oversized SGL: Not Supported 00:07:56.878 SGL Metadata Address: Not Supported 00:07:56.878 SGL Offset: Not Supported 00:07:56.878 Transport SGL Data Block: Not Supported 00:07:56.878 Replay Protected Memory Block: Not Supported 00:07:56.878 00:07:56.878 Firmware Slot Information 00:07:56.878 ========================= 00:07:56.878 Active slot: 1 00:07:56.878 Slot 1 Firmware Revision: 1.0 00:07:56.878 00:07:56.878 00:07:56.878 Commands Supported and Effects 00:07:56.878 ============================== 00:07:56.878 Admin Commands 00:07:56.878 -------------- 00:07:56.878 Delete I/O Submission Queue (00h): Supported 00:07:56.878 Create I/O Submission Queue (01h): Supported 00:07:56.878 
Get Log Page (02h): Supported 00:07:56.878 Delete I/O Completion Queue (04h): Supported 00:07:56.878 Create I/O Completion Queue (05h): Supported 00:07:56.878 Identify (06h): Supported 00:07:56.878 Abort (08h): Supported 00:07:56.878 Set Features (09h): Supported 00:07:56.878 Get Features (0Ah): Supported 00:07:56.878 Asynchronous Event Request (0Ch): Supported 00:07:56.878 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:56.878 Directive Send (19h): Supported 00:07:56.878 Directive Receive (1Ah): Supported 00:07:56.878 Virtualization Management (1Ch): Supported 00:07:56.878 Doorbell Buffer Config (7Ch): Supported 00:07:56.878 Format NVM (80h): Supported LBA-Change 00:07:56.878 I/O Commands 00:07:56.878 ------------ 00:07:56.878 Flush (00h): Supported LBA-Change 00:07:56.878 Write (01h): Supported LBA-Change 00:07:56.878 Read (02h): Supported 00:07:56.878 Compare (05h): Supported 00:07:56.878 Write Zeroes (08h): Supported LBA-Change 00:07:56.878 Dataset Management (09h): Supported LBA-Change 00:07:56.878 Unknown (0Ch): Supported 00:07:56.878 Unknown (12h): Supported 00:07:56.878 Copy (19h): Supported LBA-Change 00:07:56.878 Unknown (1Dh): Supported LBA-Change 00:07:56.878 00:07:56.878 Error Log 00:07:56.878 ========= 00:07:56.878 00:07:56.878 Arbitration 00:07:56.878 =========== 00:07:56.878 Arbitration Burst: no limit 00:07:56.878 00:07:56.878 Power Management 00:07:56.878 ================ 00:07:56.878 Number of Power States: 1 00:07:56.878 Current Power State: Power State #0 00:07:56.878 Power State #0: 00:07:56.878 Max Power: 25.00 W 00:07:56.878 Non-Operational State: Operational 00:07:56.878 Entry Latency: 16 microseconds 00:07:56.878 Exit Latency: 4 microseconds 00:07:56.878 Relative Read Throughput: 0 00:07:56.878 Relative Read Latency: 0 00:07:56.878 Relative Write Throughput: 0 00:07:56.878 Relative Write Latency: 0 00:07:56.878 Idle Power: Not Reported 00:07:56.878 Active Power: Not Reported 00:07:56.878 Non-Operational Permissive Mode: Not Supported 00:07:56.878 00:07:56.878 Health Information 00:07:56.878 ================== 00:07:56.878 Critical Warnings: 00:07:56.878 Available Spare Space: OK 00:07:56.878 Temperature: OK 00:07:56.878 Device Reliability: OK 00:07:56.878 Read Only: No 00:07:56.878 Volatile Memory Backup: OK 00:07:56.878 Current Temperature: 323 Kelvin (50 Celsius) 00:07:56.878 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:56.878 Available Spare: 0% 00:07:56.878 Available Spare Threshold: 0% 00:07:56.878 Life Percentage Used: 0% 00:07:56.878 Data Units Read: 679 00:07:56.878 Data Units Written: 607 00:07:56.878 Host Read Commands: 32562 00:07:56.878 Host Write Commands: 32348 00:07:56.878 Controller Busy Time: 0 minutes 00:07:56.878 Power Cycles: 0 00:07:56.878 Power On Hours: 0 hours 00:07:56.878 Unsafe Shutdowns: 0 00:07:56.878 Unrecoverable Media Errors: 0 00:07:56.878 Lifetime Error Log Entries: 0 00:07:56.878 Warning Temperature Time: 0 minutes 00:07:56.878 Critical Temperature Time: 0 minutes 00:07:56.878 00:07:56.878 Number of Queues 00:07:56.878 ================ 00:07:56.878 Number of I/O Submission Queues: 64 00:07:56.878 Number of I/O Completion Queues: 64 00:07:56.878 00:07:56.878 ZNS Specific Controller Data 00:07:56.879 ============================ 00:07:56.879 Zone Append Size Limit: 0 00:07:56.879 00:07:56.879 00:07:56.879 Active Namespaces 00:07:56.879 ================= 00:07:56.879 Namespace ID:1 00:07:56.879 Error Recovery Timeout: Unlimited 00:07:56.879 Command Set Identifier: NVM (00h) 00:07:56.879 Deallocate: Supported 
00:07:56.879 Deallocated/Unwritten Error: Supported 00:07:56.879 Deallocated Read Value: All 0x00 00:07:56.879 Deallocate in Write Zeroes: Not Supported 00:07:56.879 Deallocated Guard Field: 0xFFFF 00:07:56.879 Flush: Supported 00:07:56.879 Reservation: Not Supported 00:07:56.879 Metadata Transferred as: Separate Metadata Buffer 00:07:56.879 Namespace Sharing Capabilities: Private 00:07:56.879 Size (in LBAs): 1548666 (5GiB) 00:07:56.879 Capacity (in LBAs): 1548666 (5GiB) 00:07:56.879 Utilization (in LBAs): 1548666 (5GiB) 00:07:56.879 Thin Provisioning: Not Supported 00:07:56.879 Per-NS Atomic Units: No 00:07:56.879 Maximum Single Source Range Length: 128 00:07:56.879 Maximum Copy Length: 128 00:07:56.879 Maximum Source Range Count: 128 00:07:56.879 NGUID/EUI64 Never Reused: No 00:07:56.879 Namespace Write Protected: No 00:07:56.879 Number of LBA Formats: 8 00:07:56.879 Current LBA Format: LBA Format #07 00:07:56.879 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:56.879 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:56.879 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:56.879 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:56.879 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:56.879 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:56.879 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:56.879 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:56.879 00:07:56.879 NVM Specific Namespace Data 00:07:56.879 =========================== 00:07:56.879 Logical Block Storage Tag Mask: 0 00:07:56.879 Protection Information Capabilities: 00:07:56.879 16b Guard Protection Information Storage Tag Support: No 00:07:56.879 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:56.879 Storage Tag Check Read Support: No 00:07:56.879 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.879 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.879 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.879 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.879 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.879 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.879 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.879 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:56.879 08:08:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:56.879 08:08:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:57.139 ===================================================== 00:07:57.139 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:57.139 ===================================================== 00:07:57.139 Controller Capabilities/Features 00:07:57.139 ================================ 00:07:57.139 Vendor ID: 1b36 00:07:57.139 Subsystem Vendor ID: 1af4 00:07:57.139 Serial Number: 12341 00:07:57.139 Model Number: QEMU NVMe Ctrl 00:07:57.139 Firmware Version: 8.0.0 00:07:57.139 Recommended Arb Burst: 6 00:07:57.139 IEEE OUI Identifier: 00 54 52 00:07:57.139 Multi-path I/O 00:07:57.139 May have multiple subsystem ports: No 00:07:57.139 May have multiple 
controllers: No 00:07:57.139 Associated with SR-IOV VF: No 00:07:57.139 Max Data Transfer Size: 524288 00:07:57.139 Max Number of Namespaces: 256 00:07:57.139 Max Number of I/O Queues: 64 00:07:57.139 NVMe Specification Version (VS): 1.4 00:07:57.139 NVMe Specification Version (Identify): 1.4 00:07:57.139 Maximum Queue Entries: 2048 00:07:57.139 Contiguous Queues Required: Yes 00:07:57.139 Arbitration Mechanisms Supported 00:07:57.139 Weighted Round Robin: Not Supported 00:07:57.139 Vendor Specific: Not Supported 00:07:57.139 Reset Timeout: 7500 ms 00:07:57.139 Doorbell Stride: 4 bytes 00:07:57.139 NVM Subsystem Reset: Not Supported 00:07:57.139 Command Sets Supported 00:07:57.139 NVM Command Set: Supported 00:07:57.139 Boot Partition: Not Supported 00:07:57.139 Memory Page Size Minimum: 4096 bytes 00:07:57.139 Memory Page Size Maximum: 65536 bytes 00:07:57.139 Persistent Memory Region: Not Supported 00:07:57.139 Optional Asynchronous Events Supported 00:07:57.139 Namespace Attribute Notices: Supported 00:07:57.139 Firmware Activation Notices: Not Supported 00:07:57.139 ANA Change Notices: Not Supported 00:07:57.139 PLE Aggregate Log Change Notices: Not Supported 00:07:57.139 LBA Status Info Alert Notices: Not Supported 00:07:57.139 EGE Aggregate Log Change Notices: Not Supported 00:07:57.139 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.139 Zone Descriptor Change Notices: Not Supported 00:07:57.139 Discovery Log Change Notices: Not Supported 00:07:57.139 Controller Attributes 00:07:57.139 128-bit Host Identifier: Not Supported 00:07:57.139 Non-Operational Permissive Mode: Not Supported 00:07:57.139 NVM Sets: Not Supported 00:07:57.139 Read Recovery Levels: Not Supported 00:07:57.139 Endurance Groups: Not Supported 00:07:57.139 Predictable Latency Mode: Not Supported 00:07:57.139 Traffic Based Keep ALive: Not Supported 00:07:57.139 Namespace Granularity: Not Supported 00:07:57.139 SQ Associations: Not Supported 00:07:57.139 UUID List: Not Supported 00:07:57.139 Multi-Domain Subsystem: Not Supported 00:07:57.139 Fixed Capacity Management: Not Supported 00:07:57.139 Variable Capacity Management: Not Supported 00:07:57.139 Delete Endurance Group: Not Supported 00:07:57.139 Delete NVM Set: Not Supported 00:07:57.139 Extended LBA Formats Supported: Supported 00:07:57.139 Flexible Data Placement Supported: Not Supported 00:07:57.139 00:07:57.139 Controller Memory Buffer Support 00:07:57.139 ================================ 00:07:57.139 Supported: No 00:07:57.139 00:07:57.139 Persistent Memory Region Support 00:07:57.139 ================================ 00:07:57.139 Supported: No 00:07:57.139 00:07:57.139 Admin Command Set Attributes 00:07:57.139 ============================ 00:07:57.139 Security Send/Receive: Not Supported 00:07:57.139 Format NVM: Supported 00:07:57.139 Firmware Activate/Download: Not Supported 00:07:57.139 Namespace Management: Supported 00:07:57.139 Device Self-Test: Not Supported 00:07:57.139 Directives: Supported 00:07:57.139 NVMe-MI: Not Supported 00:07:57.139 Virtualization Management: Not Supported 00:07:57.139 Doorbell Buffer Config: Supported 00:07:57.139 Get LBA Status Capability: Not Supported 00:07:57.139 Command & Feature Lockdown Capability: Not Supported 00:07:57.139 Abort Command Limit: 4 00:07:57.139 Async Event Request Limit: 4 00:07:57.139 Number of Firmware Slots: N/A 00:07:57.139 Firmware Slot 1 Read-Only: N/A 00:07:57.139 Firmware Activation Without Reset: N/A 00:07:57.139 Multiple Update Detection Support: N/A 00:07:57.139 Firmware Update 
Granularity: No Information Provided 00:07:57.139 Per-Namespace SMART Log: Yes 00:07:57.139 Asymmetric Namespace Access Log Page: Not Supported 00:07:57.139 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:57.139 Command Effects Log Page: Supported 00:07:57.139 Get Log Page Extended Data: Supported 00:07:57.139 Telemetry Log Pages: Not Supported 00:07:57.139 Persistent Event Log Pages: Not Supported 00:07:57.139 Supported Log Pages Log Page: May Support 00:07:57.139 Commands Supported & Effects Log Page: Not Supported 00:07:57.139 Feature Identifiers & Effects Log Page:May Support 00:07:57.139 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.139 Data Area 4 for Telemetry Log: Not Supported 00:07:57.139 Error Log Page Entries Supported: 1 00:07:57.139 Keep Alive: Not Supported 00:07:57.139 00:07:57.139 NVM Command Set Attributes 00:07:57.139 ========================== 00:07:57.139 Submission Queue Entry Size 00:07:57.139 Max: 64 00:07:57.139 Min: 64 00:07:57.139 Completion Queue Entry Size 00:07:57.139 Max: 16 00:07:57.139 Min: 16 00:07:57.139 Number of Namespaces: 256 00:07:57.139 Compare Command: Supported 00:07:57.139 Write Uncorrectable Command: Not Supported 00:07:57.139 Dataset Management Command: Supported 00:07:57.139 Write Zeroes Command: Supported 00:07:57.139 Set Features Save Field: Supported 00:07:57.139 Reservations: Not Supported 00:07:57.139 Timestamp: Supported 00:07:57.139 Copy: Supported 00:07:57.139 Volatile Write Cache: Present 00:07:57.139 Atomic Write Unit (Normal): 1 00:07:57.139 Atomic Write Unit (PFail): 1 00:07:57.139 Atomic Compare & Write Unit: 1 00:07:57.139 Fused Compare & Write: Not Supported 00:07:57.139 Scatter-Gather List 00:07:57.139 SGL Command Set: Supported 00:07:57.139 SGL Keyed: Not Supported 00:07:57.139 SGL Bit Bucket Descriptor: Not Supported 00:07:57.139 SGL Metadata Pointer: Not Supported 00:07:57.139 Oversized SGL: Not Supported 00:07:57.139 SGL Metadata Address: Not Supported 00:07:57.139 SGL Offset: Not Supported 00:07:57.139 Transport SGL Data Block: Not Supported 00:07:57.139 Replay Protected Memory Block: Not Supported 00:07:57.139 00:07:57.139 Firmware Slot Information 00:07:57.139 ========================= 00:07:57.139 Active slot: 1 00:07:57.139 Slot 1 Firmware Revision: 1.0 00:07:57.139 00:07:57.139 00:07:57.139 Commands Supported and Effects 00:07:57.139 ============================== 00:07:57.139 Admin Commands 00:07:57.139 -------------- 00:07:57.139 Delete I/O Submission Queue (00h): Supported 00:07:57.139 Create I/O Submission Queue (01h): Supported 00:07:57.139 Get Log Page (02h): Supported 00:07:57.139 Delete I/O Completion Queue (04h): Supported 00:07:57.140 Create I/O Completion Queue (05h): Supported 00:07:57.140 Identify (06h): Supported 00:07:57.140 Abort (08h): Supported 00:07:57.140 Set Features (09h): Supported 00:07:57.140 Get Features (0Ah): Supported 00:07:57.140 Asynchronous Event Request (0Ch): Supported 00:07:57.140 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.140 Directive Send (19h): Supported 00:07:57.140 Directive Receive (1Ah): Supported 00:07:57.140 Virtualization Management (1Ch): Supported 00:07:57.140 Doorbell Buffer Config (7Ch): Supported 00:07:57.140 Format NVM (80h): Supported LBA-Change 00:07:57.140 I/O Commands 00:07:57.140 ------------ 00:07:57.140 Flush (00h): Supported LBA-Change 00:07:57.140 Write (01h): Supported LBA-Change 00:07:57.140 Read (02h): Supported 00:07:57.140 Compare (05h): Supported 00:07:57.140 Write Zeroes (08h): Supported LBA-Change 00:07:57.140 
Dataset Management (09h): Supported LBA-Change 00:07:57.140 Unknown (0Ch): Supported 00:07:57.140 Unknown (12h): Supported 00:07:57.140 Copy (19h): Supported LBA-Change 00:07:57.140 Unknown (1Dh): Supported LBA-Change 00:07:57.140 00:07:57.140 Error Log 00:07:57.140 ========= 00:07:57.140 00:07:57.140 Arbitration 00:07:57.140 =========== 00:07:57.140 Arbitration Burst: no limit 00:07:57.140 00:07:57.140 Power Management 00:07:57.140 ================ 00:07:57.140 Number of Power States: 1 00:07:57.140 Current Power State: Power State #0 00:07:57.140 Power State #0: 00:07:57.140 Max Power: 25.00 W 00:07:57.140 Non-Operational State: Operational 00:07:57.140 Entry Latency: 16 microseconds 00:07:57.140 Exit Latency: 4 microseconds 00:07:57.140 Relative Read Throughput: 0 00:07:57.140 Relative Read Latency: 0 00:07:57.140 Relative Write Throughput: 0 00:07:57.140 Relative Write Latency: 0 00:07:57.140 Idle Power: Not Reported 00:07:57.140 Active Power: Not Reported 00:07:57.140 Non-Operational Permissive Mode: Not Supported 00:07:57.140 00:07:57.140 Health Information 00:07:57.140 ================== 00:07:57.140 Critical Warnings: 00:07:57.140 Available Spare Space: OK 00:07:57.140 Temperature: OK 00:07:57.140 Device Reliability: OK 00:07:57.140 Read Only: No 00:07:57.140 Volatile Memory Backup: OK 00:07:57.140 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.140 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.140 Available Spare: 0% 00:07:57.140 Available Spare Threshold: 0% 00:07:57.140 Life Percentage Used: 0% 00:07:57.140 Data Units Read: 1034 00:07:57.140 Data Units Written: 901 00:07:57.140 Host Read Commands: 48758 00:07:57.140 Host Write Commands: 47551 00:07:57.140 Controller Busy Time: 0 minutes 00:07:57.140 Power Cycles: 0 00:07:57.140 Power On Hours: 0 hours 00:07:57.140 Unsafe Shutdowns: 0 00:07:57.140 Unrecoverable Media Errors: 0 00:07:57.140 Lifetime Error Log Entries: 0 00:07:57.140 Warning Temperature Time: 0 minutes 00:07:57.140 Critical Temperature Time: 0 minutes 00:07:57.140 00:07:57.140 Number of Queues 00:07:57.140 ================ 00:07:57.140 Number of I/O Submission Queues: 64 00:07:57.140 Number of I/O Completion Queues: 64 00:07:57.140 00:07:57.140 ZNS Specific Controller Data 00:07:57.140 ============================ 00:07:57.140 Zone Append Size Limit: 0 00:07:57.140 00:07:57.140 00:07:57.140 Active Namespaces 00:07:57.140 ================= 00:07:57.140 Namespace ID:1 00:07:57.140 Error Recovery Timeout: Unlimited 00:07:57.140 Command Set Identifier: NVM (00h) 00:07:57.140 Deallocate: Supported 00:07:57.140 Deallocated/Unwritten Error: Supported 00:07:57.140 Deallocated Read Value: All 0x00 00:07:57.140 Deallocate in Write Zeroes: Not Supported 00:07:57.140 Deallocated Guard Field: 0xFFFF 00:07:57.140 Flush: Supported 00:07:57.140 Reservation: Not Supported 00:07:57.140 Namespace Sharing Capabilities: Private 00:07:57.140 Size (in LBAs): 1310720 (5GiB) 00:07:57.140 Capacity (in LBAs): 1310720 (5GiB) 00:07:57.140 Utilization (in LBAs): 1310720 (5GiB) 00:07:57.140 Thin Provisioning: Not Supported 00:07:57.140 Per-NS Atomic Units: No 00:07:57.140 Maximum Single Source Range Length: 128 00:07:57.140 Maximum Copy Length: 128 00:07:57.140 Maximum Source Range Count: 128 00:07:57.140 NGUID/EUI64 Never Reused: No 00:07:57.140 Namespace Write Protected: No 00:07:57.140 Number of LBA Formats: 8 00:07:57.140 Current LBA Format: LBA Format #04 00:07:57.140 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.140 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:07:57.140 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.140 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.140 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.140 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.140 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.140 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.140 00:07:57.140 NVM Specific Namespace Data 00:07:57.140 =========================== 00:07:57.140 Logical Block Storage Tag Mask: 0 00:07:57.140 Protection Information Capabilities: 00:07:57.140 16b Guard Protection Information Storage Tag Support: No 00:07:57.140 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.140 Storage Tag Check Read Support: No 00:07:57.140 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.140 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.140 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.140 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.140 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.140 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.140 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.140 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.140 08:08:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:57.140 08:08:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:57.400 ===================================================== 00:07:57.400 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:57.400 ===================================================== 00:07:57.400 Controller Capabilities/Features 00:07:57.400 ================================ 00:07:57.400 Vendor ID: 1b36 00:07:57.400 Subsystem Vendor ID: 1af4 00:07:57.400 Serial Number: 12342 00:07:57.400 Model Number: QEMU NVMe Ctrl 00:07:57.400 Firmware Version: 8.0.0 00:07:57.400 Recommended Arb Burst: 6 00:07:57.400 IEEE OUI Identifier: 00 54 52 00:07:57.400 Multi-path I/O 00:07:57.400 May have multiple subsystem ports: No 00:07:57.400 May have multiple controllers: No 00:07:57.400 Associated with SR-IOV VF: No 00:07:57.400 Max Data Transfer Size: 524288 00:07:57.400 Max Number of Namespaces: 256 00:07:57.400 Max Number of I/O Queues: 64 00:07:57.400 NVMe Specification Version (VS): 1.4 00:07:57.400 NVMe Specification Version (Identify): 1.4 00:07:57.400 Maximum Queue Entries: 2048 00:07:57.400 Contiguous Queues Required: Yes 00:07:57.400 Arbitration Mechanisms Supported 00:07:57.400 Weighted Round Robin: Not Supported 00:07:57.400 Vendor Specific: Not Supported 00:07:57.400 Reset Timeout: 7500 ms 00:07:57.400 Doorbell Stride: 4 bytes 00:07:57.400 NVM Subsystem Reset: Not Supported 00:07:57.400 Command Sets Supported 00:07:57.400 NVM Command Set: Supported 00:07:57.400 Boot Partition: Not Supported 00:07:57.400 Memory Page Size Minimum: 4096 bytes 00:07:57.400 Memory Page Size Maximum: 65536 bytes 00:07:57.400 Persistent Memory Region: Not Supported 00:07:57.400 Optional Asynchronous Events Supported 00:07:57.400 Namespace Attribute Notices: Supported 00:07:57.400 Firmware 
Activation Notices: Not Supported 00:07:57.400 ANA Change Notices: Not Supported 00:07:57.400 PLE Aggregate Log Change Notices: Not Supported 00:07:57.400 LBA Status Info Alert Notices: Not Supported 00:07:57.400 EGE Aggregate Log Change Notices: Not Supported 00:07:57.400 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.400 Zone Descriptor Change Notices: Not Supported 00:07:57.400 Discovery Log Change Notices: Not Supported 00:07:57.400 Controller Attributes 00:07:57.400 128-bit Host Identifier: Not Supported 00:07:57.400 Non-Operational Permissive Mode: Not Supported 00:07:57.400 NVM Sets: Not Supported 00:07:57.400 Read Recovery Levels: Not Supported 00:07:57.400 Endurance Groups: Not Supported 00:07:57.401 Predictable Latency Mode: Not Supported 00:07:57.401 Traffic Based Keep ALive: Not Supported 00:07:57.401 Namespace Granularity: Not Supported 00:07:57.401 SQ Associations: Not Supported 00:07:57.401 UUID List: Not Supported 00:07:57.401 Multi-Domain Subsystem: Not Supported 00:07:57.401 Fixed Capacity Management: Not Supported 00:07:57.401 Variable Capacity Management: Not Supported 00:07:57.401 Delete Endurance Group: Not Supported 00:07:57.401 Delete NVM Set: Not Supported 00:07:57.401 Extended LBA Formats Supported: Supported 00:07:57.401 Flexible Data Placement Supported: Not Supported 00:07:57.401 00:07:57.401 Controller Memory Buffer Support 00:07:57.401 ================================ 00:07:57.401 Supported: No 00:07:57.401 00:07:57.401 Persistent Memory Region Support 00:07:57.401 ================================ 00:07:57.401 Supported: No 00:07:57.401 00:07:57.401 Admin Command Set Attributes 00:07:57.401 ============================ 00:07:57.401 Security Send/Receive: Not Supported 00:07:57.401 Format NVM: Supported 00:07:57.401 Firmware Activate/Download: Not Supported 00:07:57.401 Namespace Management: Supported 00:07:57.401 Device Self-Test: Not Supported 00:07:57.401 Directives: Supported 00:07:57.401 NVMe-MI: Not Supported 00:07:57.401 Virtualization Management: Not Supported 00:07:57.401 Doorbell Buffer Config: Supported 00:07:57.401 Get LBA Status Capability: Not Supported 00:07:57.401 Command & Feature Lockdown Capability: Not Supported 00:07:57.401 Abort Command Limit: 4 00:07:57.401 Async Event Request Limit: 4 00:07:57.401 Number of Firmware Slots: N/A 00:07:57.401 Firmware Slot 1 Read-Only: N/A 00:07:57.401 Firmware Activation Without Reset: N/A 00:07:57.401 Multiple Update Detection Support: N/A 00:07:57.401 Firmware Update Granularity: No Information Provided 00:07:57.401 Per-Namespace SMART Log: Yes 00:07:57.401 Asymmetric Namespace Access Log Page: Not Supported 00:07:57.401 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:57.401 Command Effects Log Page: Supported 00:07:57.401 Get Log Page Extended Data: Supported 00:07:57.401 Telemetry Log Pages: Not Supported 00:07:57.401 Persistent Event Log Pages: Not Supported 00:07:57.401 Supported Log Pages Log Page: May Support 00:07:57.401 Commands Supported & Effects Log Page: Not Supported 00:07:57.401 Feature Identifiers & Effects Log Page:May Support 00:07:57.401 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.401 Data Area 4 for Telemetry Log: Not Supported 00:07:57.401 Error Log Page Entries Supported: 1 00:07:57.401 Keep Alive: Not Supported 00:07:57.401 00:07:57.401 NVM Command Set Attributes 00:07:57.401 ========================== 00:07:57.401 Submission Queue Entry Size 00:07:57.401 Max: 64 00:07:57.401 Min: 64 00:07:57.401 Completion Queue Entry Size 00:07:57.401 Max: 16 
00:07:57.401 Min: 16 00:07:57.401 Number of Namespaces: 256 00:07:57.401 Compare Command: Supported 00:07:57.401 Write Uncorrectable Command: Not Supported 00:07:57.401 Dataset Management Command: Supported 00:07:57.401 Write Zeroes Command: Supported 00:07:57.401 Set Features Save Field: Supported 00:07:57.401 Reservations: Not Supported 00:07:57.401 Timestamp: Supported 00:07:57.401 Copy: Supported 00:07:57.401 Volatile Write Cache: Present 00:07:57.401 Atomic Write Unit (Normal): 1 00:07:57.401 Atomic Write Unit (PFail): 1 00:07:57.401 Atomic Compare & Write Unit: 1 00:07:57.401 Fused Compare & Write: Not Supported 00:07:57.401 Scatter-Gather List 00:07:57.401 SGL Command Set: Supported 00:07:57.401 SGL Keyed: Not Supported 00:07:57.401 SGL Bit Bucket Descriptor: Not Supported 00:07:57.401 SGL Metadata Pointer: Not Supported 00:07:57.401 Oversized SGL: Not Supported 00:07:57.401 SGL Metadata Address: Not Supported 00:07:57.401 SGL Offset: Not Supported 00:07:57.401 Transport SGL Data Block: Not Supported 00:07:57.401 Replay Protected Memory Block: Not Supported 00:07:57.401 00:07:57.401 Firmware Slot Information 00:07:57.401 ========================= 00:07:57.401 Active slot: 1 00:07:57.401 Slot 1 Firmware Revision: 1.0 00:07:57.401 00:07:57.401 00:07:57.401 Commands Supported and Effects 00:07:57.401 ============================== 00:07:57.401 Admin Commands 00:07:57.401 -------------- 00:07:57.401 Delete I/O Submission Queue (00h): Supported 00:07:57.401 Create I/O Submission Queue (01h): Supported 00:07:57.401 Get Log Page (02h): Supported 00:07:57.401 Delete I/O Completion Queue (04h): Supported 00:07:57.401 Create I/O Completion Queue (05h): Supported 00:07:57.401 Identify (06h): Supported 00:07:57.401 Abort (08h): Supported 00:07:57.401 Set Features (09h): Supported 00:07:57.401 Get Features (0Ah): Supported 00:07:57.401 Asynchronous Event Request (0Ch): Supported 00:07:57.401 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.401 Directive Send (19h): Supported 00:07:57.401 Directive Receive (1Ah): Supported 00:07:57.401 Virtualization Management (1Ch): Supported 00:07:57.401 Doorbell Buffer Config (7Ch): Supported 00:07:57.401 Format NVM (80h): Supported LBA-Change 00:07:57.401 I/O Commands 00:07:57.401 ------------ 00:07:57.401 Flush (00h): Supported LBA-Change 00:07:57.401 Write (01h): Supported LBA-Change 00:07:57.401 Read (02h): Supported 00:07:57.401 Compare (05h): Supported 00:07:57.401 Write Zeroes (08h): Supported LBA-Change 00:07:57.401 Dataset Management (09h): Supported LBA-Change 00:07:57.401 Unknown (0Ch): Supported 00:07:57.401 Unknown (12h): Supported 00:07:57.401 Copy (19h): Supported LBA-Change 00:07:57.401 Unknown (1Dh): Supported LBA-Change 00:07:57.401 00:07:57.401 Error Log 00:07:57.401 ========= 00:07:57.401 00:07:57.401 Arbitration 00:07:57.401 =========== 00:07:57.401 Arbitration Burst: no limit 00:07:57.401 00:07:57.401 Power Management 00:07:57.401 ================ 00:07:57.401 Number of Power States: 1 00:07:57.401 Current Power State: Power State #0 00:07:57.401 Power State #0: 00:07:57.401 Max Power: 25.00 W 00:07:57.401 Non-Operational State: Operational 00:07:57.401 Entry Latency: 16 microseconds 00:07:57.401 Exit Latency: 4 microseconds 00:07:57.401 Relative Read Throughput: 0 00:07:57.401 Relative Read Latency: 0 00:07:57.401 Relative Write Throughput: 0 00:07:57.401 Relative Write Latency: 0 00:07:57.401 Idle Power: Not Reported 00:07:57.401 Active Power: Not Reported 00:07:57.401 Non-Operational Permissive Mode: Not Supported 
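The Health Information and Active Namespaces blocks that follow report raw values alongside derived ones: temperatures in Kelvin with a rounded Celsius figure, and namespace sizes as LBA counts with a GiB figure. A minimal shell sketch of both derivations, using only values printed in this run (323 Kelvin; 1048576 LBAs at the current LBA Format #04, whose data size is 4096 bytes) and standard awk/shell arithmetic rather than anything from the SPDK tree:
+ awk 'BEGIN { printf "%.2f Celsius\n", 323 - 273.15 }'    # C = K - 273.15
49.85 Celsius
+ echo "$((1048576 * 4096 / 1024 / 1024 / 1024)) GiB"      # LBA count x 4096-byte blocks
4 GiB
The identify output rounds 49.85 up to the 50 Celsius shown, and the 4 GiB result matches the "(4GiB)" annotation on each 1048576-LBA namespace below.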
00:07:57.401 00:07:57.401 Health Information 00:07:57.401 ================== 00:07:57.401 Critical Warnings: 00:07:57.401 Available Spare Space: OK 00:07:57.401 Temperature: OK 00:07:57.401 Device Reliability: OK 00:07:57.401 Read Only: No 00:07:57.401 Volatile Memory Backup: OK 00:07:57.401 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.401 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.401 Available Spare: 0% 00:07:57.401 Available Spare Threshold: 0% 00:07:57.401 Life Percentage Used: 0% 00:07:57.401 Data Units Read: 2159 00:07:57.401 Data Units Written: 1946 00:07:57.401 Host Read Commands: 100205 00:07:57.401 Host Write Commands: 98474 00:07:57.401 Controller Busy Time: 0 minutes 00:07:57.401 Power Cycles: 0 00:07:57.401 Power On Hours: 0 hours 00:07:57.401 Unsafe Shutdowns: 0 00:07:57.401 Unrecoverable Media Errors: 0 00:07:57.401 Lifetime Error Log Entries: 0 00:07:57.401 Warning Temperature Time: 0 minutes 00:07:57.401 Critical Temperature Time: 0 minutes 00:07:57.401 00:07:57.401 Number of Queues 00:07:57.401 ================ 00:07:57.401 Number of I/O Submission Queues: 64 00:07:57.401 Number of I/O Completion Queues: 64 00:07:57.401 00:07:57.401 ZNS Specific Controller Data 00:07:57.401 ============================ 00:07:57.401 Zone Append Size Limit: 0 00:07:57.401 00:07:57.401 00:07:57.401 Active Namespaces 00:07:57.401 ================= 00:07:57.401 Namespace ID:1 00:07:57.401 Error Recovery Timeout: Unlimited 00:07:57.401 Command Set Identifier: NVM (00h) 00:07:57.401 Deallocate: Supported 00:07:57.401 Deallocated/Unwritten Error: Supported 00:07:57.401 Deallocated Read Value: All 0x00 00:07:57.401 Deallocate in Write Zeroes: Not Supported 00:07:57.401 Deallocated Guard Field: 0xFFFF 00:07:57.401 Flush: Supported 00:07:57.402 Reservation: Not Supported 00:07:57.402 Namespace Sharing Capabilities: Private 00:07:57.402 Size (in LBAs): 1048576 (4GiB) 00:07:57.402 Capacity (in LBAs): 1048576 (4GiB) 00:07:57.402 Utilization (in LBAs): 1048576 (4GiB) 00:07:57.402 Thin Provisioning: Not Supported 00:07:57.402 Per-NS Atomic Units: No 00:07:57.402 Maximum Single Source Range Length: 128 00:07:57.402 Maximum Copy Length: 128 00:07:57.402 Maximum Source Range Count: 128 00:07:57.402 NGUID/EUI64 Never Reused: No 00:07:57.402 Namespace Write Protected: No 00:07:57.402 Number of LBA Formats: 8 00:07:57.402 Current LBA Format: LBA Format #04 00:07:57.402 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.402 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.402 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.402 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.402 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.402 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.402 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.402 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.402 00:07:57.402 NVM Specific Namespace Data 00:07:57.402 =========================== 00:07:57.402 Logical Block Storage Tag Mask: 0 00:07:57.402 Protection Information Capabilities: 00:07:57.402 16b Guard Protection Information Storage Tag Support: No 00:07:57.402 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.402 Storage Tag Check Read Support: No 00:07:57.402 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Namespace ID:2 00:07:57.402 Error Recovery Timeout: Unlimited 00:07:57.402 Command Set Identifier: NVM (00h) 00:07:57.402 Deallocate: Supported 00:07:57.402 Deallocated/Unwritten Error: Supported 00:07:57.402 Deallocated Read Value: All 0x00 00:07:57.402 Deallocate in Write Zeroes: Not Supported 00:07:57.402 Deallocated Guard Field: 0xFFFF 00:07:57.402 Flush: Supported 00:07:57.402 Reservation: Not Supported 00:07:57.402 Namespace Sharing Capabilities: Private 00:07:57.402 Size (in LBAs): 1048576 (4GiB) 00:07:57.402 Capacity (in LBAs): 1048576 (4GiB) 00:07:57.402 Utilization (in LBAs): 1048576 (4GiB) 00:07:57.402 Thin Provisioning: Not Supported 00:07:57.402 Per-NS Atomic Units: No 00:07:57.402 Maximum Single Source Range Length: 128 00:07:57.402 Maximum Copy Length: 128 00:07:57.402 Maximum Source Range Count: 128 00:07:57.402 NGUID/EUI64 Never Reused: No 00:07:57.402 Namespace Write Protected: No 00:07:57.402 Number of LBA Formats: 8 00:07:57.402 Current LBA Format: LBA Format #04 00:07:57.402 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.402 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.402 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.402 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.402 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.402 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.402 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.402 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.402 00:07:57.402 NVM Specific Namespace Data 00:07:57.402 =========================== 00:07:57.402 Logical Block Storage Tag Mask: 0 00:07:57.402 Protection Information Capabilities: 00:07:57.402 16b Guard Protection Information Storage Tag Support: No 00:07:57.402 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.402 Storage Tag Check Read Support: No 00:07:57.402 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.402 Namespace ID:3 00:07:57.402 Error Recovery Timeout: Unlimited 00:07:57.402 Command Set Identifier: NVM (00h) 00:07:57.402 Deallocate: Supported 00:07:57.402 Deallocated/Unwritten Error: Supported 00:07:57.402 Deallocated Read 
Value: All 0x00 00:07:57.402 Deallocate in Write Zeroes: Not Supported 00:07:57.402 Deallocated Guard Field: 0xFFFF 00:07:57.402 Flush: Supported 00:07:57.402 Reservation: Not Supported 00:07:57.402 Namespace Sharing Capabilities: Private 00:07:57.402 Size (in LBAs): 1048576 (4GiB) 00:07:57.402 Capacity (in LBAs): 1048576 (4GiB) 00:07:57.402 Utilization (in LBAs): 1048576 (4GiB) 00:07:57.402 Thin Provisioning: Not Supported 00:07:57.402 Per-NS Atomic Units: No 00:07:57.402 Maximum Single Source Range Length: 128 00:07:57.402 Maximum Copy Length: 128 00:07:57.402 Maximum Source Range Count: 128 00:07:57.402 NGUID/EUI64 Never Reused: No 00:07:57.402 Namespace Write Protected: No 00:07:57.402 Number of LBA Formats: 8 00:07:57.402 Current LBA Format: LBA Format #04 00:07:57.402 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.402 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.402 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.402 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:57.402 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.402 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.402 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.402 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.402 00:07:57.402 NVM Specific Namespace Data 00:07:57.402 =========================== 00:07:57.402 Logical Block Storage Tag Mask: 0 00:07:57.402 Protection Information Capabilities: 00:07:57.402 16b Guard Protection Information Storage Tag Support: No 00:07:57.402 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.661 Storage Tag Check Read Support: No 00:07:57.661 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.661 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.661 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.661 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.661 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.661 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.661 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.661 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.661 08:08:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:57.661 08:08:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:57.922 ===================================================== 00:07:57.922 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:57.922 ===================================================== 00:07:57.922 Controller Capabilities/Features 00:07:57.922 ================================ 00:07:57.922 Vendor ID: 1b36 00:07:57.922 Subsystem Vendor ID: 1af4 00:07:57.922 Serial Number: 12343 00:07:57.922 Model Number: QEMU NVMe Ctrl 00:07:57.922 Firmware Version: 8.0.0 00:07:57.922 Recommended Arb Burst: 6 00:07:57.922 IEEE OUI Identifier: 00 54 52 00:07:57.922 Multi-path I/O 00:07:57.922 May have multiple subsystem ports: No 00:07:57.922 May have multiple controllers: Yes 00:07:57.922 Associated with SR-IOV VF: No 00:07:57.922 Max Data Transfer Size: 524288 00:07:57.922 Max Number of Namespaces: 
256 00:07:57.922 Max Number of I/O Queues: 64 00:07:57.922 NVMe Specification Version (VS): 1.4 00:07:57.922 NVMe Specification Version (Identify): 1.4 00:07:57.922 Maximum Queue Entries: 2048 00:07:57.922 Contiguous Queues Required: Yes 00:07:57.922 Arbitration Mechanisms Supported 00:07:57.922 Weighted Round Robin: Not Supported 00:07:57.922 Vendor Specific: Not Supported 00:07:57.922 Reset Timeout: 7500 ms 00:07:57.922 Doorbell Stride: 4 bytes 00:07:57.922 NVM Subsystem Reset: Not Supported 00:07:57.922 Command Sets Supported 00:07:57.922 NVM Command Set: Supported 00:07:57.922 Boot Partition: Not Supported 00:07:57.922 Memory Page Size Minimum: 4096 bytes 00:07:57.922 Memory Page Size Maximum: 65536 bytes 00:07:57.922 Persistent Memory Region: Not Supported 00:07:57.922 Optional Asynchronous Events Supported 00:07:57.922 Namespace Attribute Notices: Supported 00:07:57.922 Firmware Activation Notices: Not Supported 00:07:57.922 ANA Change Notices: Not Supported 00:07:57.922 PLE Aggregate Log Change Notices: Not Supported 00:07:57.922 LBA Status Info Alert Notices: Not Supported 00:07:57.922 EGE Aggregate Log Change Notices: Not Supported 00:07:57.922 Normal NVM Subsystem Shutdown event: Not Supported 00:07:57.922 Zone Descriptor Change Notices: Not Supported 00:07:57.922 Discovery Log Change Notices: Not Supported 00:07:57.922 Controller Attributes 00:07:57.922 128-bit Host Identifier: Not Supported 00:07:57.922 Non-Operational Permissive Mode: Not Supported 00:07:57.922 NVM Sets: Not Supported 00:07:57.922 Read Recovery Levels: Not Supported 00:07:57.922 Endurance Groups: Supported 00:07:57.922 Predictable Latency Mode: Not Supported 00:07:57.922 Traffic Based Keep Alive: Not Supported 00:07:57.922 Namespace Granularity: Not Supported 00:07:57.922 SQ Associations: Not Supported 00:07:57.922 UUID List: Not Supported 00:07:57.922 Multi-Domain Subsystem: Not Supported 00:07:57.922 Fixed Capacity Management: Not Supported 00:07:57.922 Variable Capacity Management: Not Supported 00:07:57.922 Delete Endurance Group: Not Supported 00:07:57.922 Delete NVM Set: Not Supported 00:07:57.922 Extended LBA Formats Supported: Supported 00:07:57.922 Flexible Data Placement Supported: Supported 00:07:57.922 00:07:57.922 Controller Memory Buffer Support 00:07:57.922 ================================ 00:07:57.922 Supported: No 00:07:57.922 00:07:57.922 Persistent Memory Region Support 00:07:57.922 ================================ 00:07:57.922 Supported: No 00:07:57.922 00:07:57.922 Admin Command Set Attributes 00:07:57.922 ============================ 00:07:57.922 Security Send/Receive: Not Supported 00:07:57.922 Format NVM: Supported 00:07:57.922 Firmware Activate/Download: Not Supported 00:07:57.922 Namespace Management: Supported 00:07:57.922 Device Self-Test: Not Supported 00:07:57.922 Directives: Supported 00:07:57.922 NVMe-MI: Not Supported 00:07:57.922 Virtualization Management: Not Supported 00:07:57.922 Doorbell Buffer Config: Supported 00:07:57.922 Get LBA Status Capability: Not Supported 00:07:57.922 Command & Feature Lockdown Capability: Not Supported 00:07:57.922 Abort Command Limit: 4 00:07:57.922 Async Event Request Limit: 4 00:07:57.922 Number of Firmware Slots: N/A 00:07:57.922 Firmware Slot 1 Read-Only: N/A 00:07:57.922 Firmware Activation Without Reset: N/A 00:07:57.922 Multiple Update Detection Support: N/A 00:07:57.922 Firmware Update Granularity: No Information Provided 00:07:57.922 Per-Namespace SMART Log: Yes 00:07:57.922 Asymmetric Namespace Access Log Page: Not Supported
00:07:57.922 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:57.922 Command Effects Log Page: Supported 00:07:57.922 Get Log Page Extended Data: Supported 00:07:57.922 Telemetry Log Pages: Not Supported 00:07:57.922 Persistent Event Log Pages: Not Supported 00:07:57.922 Supported Log Pages Log Page: May Support 00:07:57.922 Commands Supported & Effects Log Page: Not Supported 00:07:57.922 Feature Identifiers & Effects Log Page: May Support 00:07:57.922 NVMe-MI Commands & Effects Log Page: May Support 00:07:57.922 Data Area 4 for Telemetry Log: Not Supported 00:07:57.922 Error Log Page Entries Supported: 1 00:07:57.922 Keep Alive: Not Supported 00:07:57.922 00:07:57.922 NVM Command Set Attributes 00:07:57.922 ========================== 00:07:57.922 Submission Queue Entry Size 00:07:57.922 Max: 64 00:07:57.922 Min: 64 00:07:57.922 Completion Queue Entry Size 00:07:57.922 Max: 16 00:07:57.922 Min: 16 00:07:57.922 Number of Namespaces: 256 00:07:57.922 Compare Command: Supported 00:07:57.922 Write Uncorrectable Command: Not Supported 00:07:57.922 Dataset Management Command: Supported 00:07:57.922 Write Zeroes Command: Supported 00:07:57.922 Set Features Save Field: Supported 00:07:57.922 Reservations: Not Supported 00:07:57.922 Timestamp: Supported 00:07:57.922 Copy: Supported 00:07:57.922 Volatile Write Cache: Present 00:07:57.922 Atomic Write Unit (Normal): 1 00:07:57.922 Atomic Write Unit (PFail): 1 00:07:57.922 Atomic Compare & Write Unit: 1 00:07:57.922 Fused Compare & Write: Not Supported 00:07:57.922 Scatter-Gather List 00:07:57.922 SGL Command Set: Supported 00:07:57.922 SGL Keyed: Not Supported 00:07:57.922 SGL Bit Bucket Descriptor: Not Supported 00:07:57.922 SGL Metadata Pointer: Not Supported 00:07:57.922 Oversized SGL: Not Supported 00:07:57.922 SGL Metadata Address: Not Supported 00:07:57.922 SGL Offset: Not Supported 00:07:57.922 Transport SGL Data Block: Not Supported 00:07:57.922 Replay Protected Memory Block: Not Supported 00:07:57.922 00:07:57.922 Firmware Slot Information 00:07:57.922 ========================= 00:07:57.922 Active slot: 1 00:07:57.922 Slot 1 Firmware Revision: 1.0 00:07:57.922 00:07:57.922 00:07:57.922 Commands Supported and Effects 00:07:57.922 ============================== 00:07:57.922 Admin Commands 00:07:57.922 -------------- 00:07:57.922 Delete I/O Submission Queue (00h): Supported 00:07:57.922 Create I/O Submission Queue (01h): Supported 00:07:57.922 Get Log Page (02h): Supported 00:07:57.922 Delete I/O Completion Queue (04h): Supported 00:07:57.922 Create I/O Completion Queue (05h): Supported 00:07:57.922 Identify (06h): Supported 00:07:57.922 Abort (08h): Supported 00:07:57.922 Set Features (09h): Supported 00:07:57.922 Get Features (0Ah): Supported 00:07:57.922 Asynchronous Event Request (0Ch): Supported 00:07:57.922 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:57.922 Directive Send (19h): Supported 00:07:57.922 Directive Receive (1Ah): Supported 00:07:57.922 Virtualization Management (1Ch): Supported 00:07:57.922 Doorbell Buffer Config (7Ch): Supported 00:07:57.922 Format NVM (80h): Supported LBA-Change 00:07:57.922 I/O Commands 00:07:57.922 ------------ 00:07:57.922 Flush (00h): Supported LBA-Change 00:07:57.922 Write (01h): Supported LBA-Change 00:07:57.922 Read (02h): Supported 00:07:57.922 Compare (05h): Supported 00:07:57.922 Write Zeroes (08h): Supported LBA-Change 00:07:57.922 Dataset Management (09h): Supported LBA-Change 00:07:57.922 Unknown (0Ch): Supported 00:07:57.922 Unknown (12h): Supported 00:07:57.922 Copy
(19h): Supported LBA-Change 00:07:57.922 Unknown (1Dh): Supported LBA-Change 00:07:57.922 00:07:57.922 Error Log 00:07:57.922 ========= 00:07:57.922 00:07:57.922 Arbitration 00:07:57.922 =========== 00:07:57.922 Arbitration Burst: no limit 00:07:57.922 00:07:57.922 Power Management 00:07:57.922 ================ 00:07:57.922 Number of Power States: 1 00:07:57.922 Current Power State: Power State #0 00:07:57.922 Power State #0: 00:07:57.922 Max Power: 25.00 W 00:07:57.922 Non-Operational State: Operational 00:07:57.922 Entry Latency: 16 microseconds 00:07:57.922 Exit Latency: 4 microseconds 00:07:57.923 Relative Read Throughput: 0 00:07:57.923 Relative Read Latency: 0 00:07:57.923 Relative Write Throughput: 0 00:07:57.923 Relative Write Latency: 0 00:07:57.923 Idle Power: Not Reported 00:07:57.923 Active Power: Not Reported 00:07:57.923 Non-Operational Permissive Mode: Not Supported 00:07:57.923 00:07:57.923 Health Information 00:07:57.923 ================== 00:07:57.923 Critical Warnings: 00:07:57.923 Available Spare Space: OK 00:07:57.923 Temperature: OK 00:07:57.923 Device Reliability: OK 00:07:57.923 Read Only: No 00:07:57.923 Volatile Memory Backup: OK 00:07:57.923 Current Temperature: 323 Kelvin (50 Celsius) 00:07:57.923 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:57.923 Available Spare: 0% 00:07:57.923 Available Spare Threshold: 0% 00:07:57.923 Life Percentage Used: 0% 00:07:57.923 Data Units Read: 778 00:07:57.923 Data Units Written: 707 00:07:57.923 Host Read Commands: 33876 00:07:57.923 Host Write Commands: 33299 00:07:57.923 Controller Busy Time: 0 minutes 00:07:57.923 Power Cycles: 0 00:07:57.923 Power On Hours: 0 hours 00:07:57.923 Unsafe Shutdowns: 0 00:07:57.923 Unrecoverable Media Errors: 0 00:07:57.923 Lifetime Error Log Entries: 0 00:07:57.923 Warning Temperature Time: 0 minutes 00:07:57.923 Critical Temperature Time: 0 minutes 00:07:57.923 00:07:57.923 Number of Queues 00:07:57.923 ================ 00:07:57.923 Number of I/O Submission Queues: 64 00:07:57.923 Number of I/O Completion Queues: 64 00:07:57.923 00:07:57.923 ZNS Specific Controller Data 00:07:57.923 ============================ 00:07:57.923 Zone Append Size Limit: 0 00:07:57.923 00:07:57.923 00:07:57.923 Active Namespaces 00:07:57.923 ================= 00:07:57.923 Namespace ID:1 00:07:57.923 Error Recovery Timeout: Unlimited 00:07:57.923 Command Set Identifier: NVM (00h) 00:07:57.923 Deallocate: Supported 00:07:57.923 Deallocated/Unwritten Error: Supported 00:07:57.923 Deallocated Read Value: All 0x00 00:07:57.923 Deallocate in Write Zeroes: Not Supported 00:07:57.923 Deallocated Guard Field: 0xFFFF 00:07:57.923 Flush: Supported 00:07:57.923 Reservation: Not Supported 00:07:57.923 Namespace Sharing Capabilities: Multiple Controllers 00:07:57.923 Size (in LBAs): 262144 (1GiB) 00:07:57.923 Capacity (in LBAs): 262144 (1GiB) 00:07:57.923 Utilization (in LBAs): 262144 (1GiB) 00:07:57.923 Thin Provisioning: Not Supported 00:07:57.923 Per-NS Atomic Units: No 00:07:57.923 Maximum Single Source Range Length: 128 00:07:57.923 Maximum Copy Length: 128 00:07:57.923 Maximum Source Range Count: 128 00:07:57.923 NGUID/EUI64 Never Reused: No 00:07:57.923 Namespace Write Protected: No 00:07:57.923 Endurance group ID: 1 00:07:57.923 Number of LBA Formats: 8 00:07:57.923 Current LBA Format: LBA Format #04 00:07:57.923 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:57.923 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:57.923 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:57.923 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:07:57.923 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:57.923 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:57.923 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:57.923 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:57.923 00:07:57.923 Get Feature FDP: 00:07:57.923 ================ 00:07:57.923 Enabled: Yes 00:07:57.923 FDP configuration index: 0 00:07:57.923 00:07:57.923 FDP configurations log page 00:07:57.923 =========================== 00:07:57.923 Number of FDP configurations: 1 00:07:57.923 Version: 0 00:07:57.923 Size: 112 00:07:57.923 FDP Configuration Descriptor: 0 00:07:57.923 Descriptor Size: 96 00:07:57.923 Reclaim Group Identifier Format: 2 00:07:57.923 FDP Volatile Write Cache: Not Present 00:07:57.923 FDP Configuration: Valid 00:07:57.923 Vendor Specific Size: 0 00:07:57.923 Number of Reclaim Groups: 2 00:07:57.923 Number of Reclaim Unit Handles: 8 00:07:57.923 Max Placement Identifiers: 128 00:07:57.923 Number of Namespaces Supported: 256 00:07:57.923 Reclaim Unit Nominal Size: 6000000 bytes 00:07:57.923 Estimated Reclaim Unit Time Limit: Not Reported 00:07:57.923 RUH Desc #000: RUH Type: Initially Isolated 00:07:57.923 RUH Desc #001: RUH Type: Initially Isolated 00:07:57.923 RUH Desc #002: RUH Type: Initially Isolated 00:07:57.923 RUH Desc #003: RUH Type: Initially Isolated 00:07:57.923 RUH Desc #004: RUH Type: Initially Isolated 00:07:57.923 RUH Desc #005: RUH Type: Initially Isolated 00:07:57.923 RUH Desc #006: RUH Type: Initially Isolated 00:07:57.923 RUH Desc #007: RUH Type: Initially Isolated 00:07:57.923 00:07:57.923 FDP reclaim unit handle usage log page 00:07:57.923 ====================================== 00:07:57.923 Number of Reclaim Unit Handles: 8 00:07:57.923 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:57.923 RUH Usage Desc #001: RUH Attributes: Unused 00:07:57.923 RUH Usage Desc #002: RUH Attributes: Unused 00:07:57.923 RUH Usage Desc #003: RUH Attributes: Unused 00:07:57.923 RUH Usage Desc #004: RUH Attributes: Unused 00:07:57.923 RUH Usage Desc #005: RUH Attributes: Unused 00:07:57.923 RUH Usage Desc #006: RUH Attributes: Unused 00:07:57.923 RUH Usage Desc #007: RUH Attributes: Unused 00:07:57.923 00:07:57.923 FDP statistics log page 00:07:57.923 ======================= 00:07:57.923 Host bytes with metadata written: 447782912 00:07:57.923 Media bytes with metadata written: 447848448 00:07:57.923 Media bytes erased: 0 00:07:57.923 00:07:57.923 FDP events log page 00:07:57.923 =================== 00:07:57.923 Number of FDP events: 0 00:07:57.923 00:07:57.923 NVM Specific Namespace Data 00:07:57.923 =========================== 00:07:57.923 Logical Block Storage Tag Mask: 0 00:07:57.923 Protection Information Capabilities: 00:07:57.923 16b Guard Protection Information Storage Tag Support: No 00:07:57.923 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:57.923 Storage Tag Check Read Support: No 00:07:57.923 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.923 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.923 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.923 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.923 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.923 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.923 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.923 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:57.923 ************************************ 00:07:57.923 END TEST nvme_identify 00:07:57.923 ************************************ 00:07:57.923 00:07:57.923 real 0m1.693s 00:07:57.923 user 0m0.691s 00:07:57.923 sys 0m0.799s 00:07:57.923 08:08:02 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.923 08:08:02 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:57.923 08:08:02 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:57.923 08:08:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.923 08:08:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.923 08:08:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.923 ************************************ 00:07:57.923 START TEST nvme_perf 00:07:57.923 ************************************ 00:07:57.923 08:08:02 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:57.923 08:08:02 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:59.305 Initializing NVMe Controllers 00:07:59.305 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:59.305 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:59.305 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:59.305 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:59.305 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:59.305 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:59.305 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:59.305 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:59.305 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:59.305 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:59.305 Initialization complete. Launching workers. 
00:07:59.305 ======================================================== 00:07:59.305 Latency(us) 00:07:59.305 Device Information : IOPS MiB/s Average min max 00:07:59.305 PCIE (0000:00:10.0) NSID 1 from core 0: 13397.98 157.01 9561.06 8107.46 44351.71 00:07:59.305 PCIE (0000:00:11.0) NSID 1 from core 0: 13397.98 157.01 9537.07 8185.17 42077.20 00:07:59.305 PCIE (0000:00:13.0) NSID 1 from core 0: 13397.98 157.01 9510.74 8048.05 39802.73 00:07:59.305 PCIE (0000:00:12.0) NSID 1 from core 0: 13397.98 157.01 9484.04 8074.28 37151.38 00:07:59.305 PCIE (0000:00:12.0) NSID 2 from core 0: 13397.98 157.01 9457.51 8124.34 34484.05 00:07:59.305 PCIE (0000:00:12.0) NSID 3 from core 0: 13397.98 157.01 9431.17 8166.26 31657.02 00:07:59.305 ======================================================== 00:07:59.305 Total : 80387.90 942.05 9496.93 8048.05 44351.71 00:07:59.305 00:07:59.305 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:59.305 ================================================================================= 00:07:59.305 1.00000% : 8340.945us 00:07:59.305 10.00000% : 8579.258us 00:07:59.305 25.00000% : 8877.149us 00:07:59.305 50.00000% : 9175.040us 00:07:59.305 75.00000% : 9592.087us 00:07:59.305 90.00000% : 10366.604us 00:07:59.305 95.00000% : 10843.229us 00:07:59.305 98.00000% : 11319.855us 00:07:59.305 99.00000% : 13226.356us 00:07:59.305 99.50000% : 36223.535us 00:07:59.305 99.90000% : 43849.542us 00:07:59.305 99.99000% : 44326.167us 00:07:59.305 99.99900% : 44564.480us 00:07:59.305 99.99990% : 44564.480us 00:07:59.305 99.99999% : 44564.480us 00:07:59.305 00:07:59.305 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:59.305 ================================================================================= 00:07:59.305 1.00000% : 8400.524us 00:07:59.305 10.00000% : 8698.415us 00:07:59.305 25.00000% : 8877.149us 00:07:59.305 50.00000% : 9175.040us 00:07:59.305 75.00000% : 9532.509us 00:07:59.305 90.00000% : 10366.604us 00:07:59.305 95.00000% : 10783.651us 00:07:59.305 98.00000% : 11200.698us 00:07:59.305 99.00000% : 12570.996us 00:07:59.305 99.50000% : 33840.407us 00:07:59.305 99.90000% : 41466.415us 00:07:59.305 99.99000% : 42181.353us 00:07:59.305 99.99900% : 42181.353us 00:07:59.305 99.99990% : 42181.353us 00:07:59.305 99.99999% : 42181.353us 00:07:59.305 00:07:59.305 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:59.305 ================================================================================= 00:07:59.305 1.00000% : 8400.524us 00:07:59.305 10.00000% : 8698.415us 00:07:59.305 25.00000% : 8877.149us 00:07:59.305 50.00000% : 9175.040us 00:07:59.305 75.00000% : 9532.509us 00:07:59.305 90.00000% : 10366.604us 00:07:59.305 95.00000% : 10783.651us 00:07:59.305 98.00000% : 11200.698us 00:07:59.305 99.00000% : 12630.575us 00:07:59.305 99.50000% : 31695.593us 00:07:59.305 99.90000% : 39321.600us 00:07:59.305 99.99000% : 39798.225us 00:07:59.305 99.99900% : 40036.538us 00:07:59.305 99.99990% : 40036.538us 00:07:59.305 99.99999% : 40036.538us 00:07:59.305 00:07:59.305 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:59.305 ================================================================================= 00:07:59.305 1.00000% : 8400.524us 00:07:59.305 10.00000% : 8638.836us 00:07:59.305 25.00000% : 8877.149us 00:07:59.305 50.00000% : 9175.040us 00:07:59.305 75.00000% : 9532.509us 00:07:59.305 90.00000% : 10366.604us 00:07:59.305 95.00000% : 10783.651us 00:07:59.305 98.00000% : 11200.698us 00:07:59.305 
99.00000% : 12273.105us 00:07:59.305 99.50000% : 29193.309us 00:07:59.305 99.90000% : 36700.160us 00:07:59.305 99.99000% : 37176.785us 00:07:59.305 99.99900% : 37176.785us 00:07:59.305 99.99990% : 37176.785us 00:07:59.305 99.99999% : 37176.785us 00:07:59.305 00:07:59.305 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:59.305 ================================================================================= 00:07:59.305 1.00000% : 8400.524us 00:07:59.305 10.00000% : 8638.836us 00:07:59.305 25.00000% : 8877.149us 00:07:59.305 50.00000% : 9175.040us 00:07:59.305 75.00000% : 9532.509us 00:07:59.305 90.00000% : 10366.604us 00:07:59.305 95.00000% : 10783.651us 00:07:59.305 98.00000% : 11200.698us 00:07:59.305 99.00000% : 11915.636us 00:07:59.305 99.50000% : 26691.025us 00:07:59.305 99.90000% : 34078.720us 00:07:59.305 99.99000% : 34555.345us 00:07:59.305 99.99900% : 34555.345us 00:07:59.305 99.99990% : 34555.345us 00:07:59.305 99.99999% : 34555.345us 00:07:59.305 00:07:59.305 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:59.305 ================================================================================= 00:07:59.305 1.00000% : 8400.524us 00:07:59.305 10.00000% : 8638.836us 00:07:59.305 25.00000% : 8877.149us 00:07:59.305 50.00000% : 9175.040us 00:07:59.305 75.00000% : 9532.509us 00:07:59.306 90.00000% : 10366.604us 00:07:59.306 95.00000% : 10783.651us 00:07:59.306 98.00000% : 11141.120us 00:07:59.306 99.00000% : 11736.902us 00:07:59.306 99.50000% : 24188.742us 00:07:59.306 99.90000% : 31218.967us 00:07:59.306 99.99000% : 31695.593us 00:07:59.306 99.99900% : 31695.593us 00:07:59.306 99.99990% : 31695.593us 00:07:59.306 99.99999% : 31695.593us 00:07:59.306 00:07:59.306 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:59.306 ============================================================================== 00:07:59.306 Range in us Cumulative IO count 00:07:59.306 8102.633 - 8162.211: 0.0670% ( 9) 00:07:59.306 8162.211 - 8221.789: 0.3051% ( 32) 00:07:59.306 8221.789 - 8281.367: 0.8557% ( 74) 00:07:59.306 8281.367 - 8340.945: 1.7039% ( 114) 00:07:59.306 8340.945 - 8400.524: 3.0878% ( 186) 00:07:59.306 8400.524 - 8460.102: 4.9182% ( 246) 00:07:59.306 8460.102 - 8519.680: 7.3810% ( 331) 00:07:59.306 8519.680 - 8579.258: 10.2083% ( 380) 00:07:59.306 8579.258 - 8638.836: 13.4226% ( 432) 00:07:59.306 8638.836 - 8698.415: 16.8824% ( 465) 00:07:59.306 8698.415 - 8757.993: 20.7068% ( 514) 00:07:59.306 8757.993 - 8817.571: 24.7768% ( 547) 00:07:59.306 8817.571 - 8877.149: 28.9732% ( 564) 00:07:59.306 8877.149 - 8936.727: 33.3110% ( 583) 00:07:59.306 8936.727 - 8996.305: 37.6339% ( 581) 00:07:59.306 8996.305 - 9055.884: 42.1205% ( 603) 00:07:59.306 9055.884 - 9115.462: 46.3988% ( 575) 00:07:59.306 9115.462 - 9175.040: 50.8110% ( 593) 00:07:59.306 9175.040 - 9234.618: 55.2083% ( 591) 00:07:59.306 9234.618 - 9294.196: 59.4792% ( 574) 00:07:59.306 9294.196 - 9353.775: 63.5640% ( 549) 00:07:59.306 9353.775 - 9413.353: 67.3512% ( 509) 00:07:59.306 9413.353 - 9472.931: 70.8482% ( 470) 00:07:59.306 9472.931 - 9532.509: 74.0179% ( 426) 00:07:59.306 9532.509 - 9592.087: 76.7783% ( 371) 00:07:59.306 9592.087 - 9651.665: 78.9807% ( 296) 00:07:59.306 9651.665 - 9711.244: 80.9747% ( 268) 00:07:59.306 9711.244 - 9770.822: 82.5521% ( 212) 00:07:59.306 9770.822 - 9830.400: 83.9062% ( 182) 00:07:59.306 9830.400 - 9889.978: 85.0000% ( 147) 00:07:59.306 9889.978 - 9949.556: 85.9077% ( 122) 00:07:59.306 9949.556 - 10009.135: 86.7485% ( 113) 00:07:59.306 10009.135 
- 10068.713: 87.5000% ( 101) 00:07:59.306 10068.713 - 10128.291: 88.1101% ( 82) 00:07:59.306 10128.291 - 10187.869: 88.7054% ( 80) 00:07:59.306 10187.869 - 10247.447: 89.2783% ( 77) 00:07:59.306 10247.447 - 10307.025: 89.8661% ( 79) 00:07:59.306 10307.025 - 10366.604: 90.4613% ( 80) 00:07:59.306 10366.604 - 10426.182: 91.0714% ( 82) 00:07:59.306 10426.182 - 10485.760: 91.6518% ( 78) 00:07:59.306 10485.760 - 10545.338: 92.2842% ( 85) 00:07:59.306 10545.338 - 10604.916: 92.8423% ( 75) 00:07:59.306 10604.916 - 10664.495: 93.4673% ( 84) 00:07:59.306 10664.495 - 10724.073: 94.0476% ( 78) 00:07:59.306 10724.073 - 10783.651: 94.6131% ( 76) 00:07:59.306 10783.651 - 10843.229: 95.1711% ( 75) 00:07:59.306 10843.229 - 10902.807: 95.7217% ( 74) 00:07:59.306 10902.807 - 10962.385: 96.1161% ( 53) 00:07:59.306 10962.385 - 11021.964: 96.5104% ( 53) 00:07:59.306 11021.964 - 11081.542: 96.9196% ( 55) 00:07:59.306 11081.542 - 11141.120: 97.2470% ( 44) 00:07:59.306 11141.120 - 11200.698: 97.5670% ( 43) 00:07:59.306 11200.698 - 11260.276: 97.8423% ( 37) 00:07:59.306 11260.276 - 11319.855: 98.0283% ( 25) 00:07:59.306 11319.855 - 11379.433: 98.1845% ( 21) 00:07:59.306 11379.433 - 11439.011: 98.2961% ( 15) 00:07:59.306 11439.011 - 11498.589: 98.4301% ( 18) 00:07:59.306 11498.589 - 11558.167: 98.4896% ( 8) 00:07:59.306 11558.167 - 11617.745: 98.5491% ( 8) 00:07:59.306 11617.745 - 11677.324: 98.5938% ( 6) 00:07:59.306 11677.324 - 11736.902: 98.6384% ( 6) 00:07:59.306 11736.902 - 11796.480: 98.6682% ( 4) 00:07:59.306 11796.480 - 11856.058: 98.6830% ( 2) 00:07:59.306 11856.058 - 11915.636: 98.7054% ( 3) 00:07:59.306 11915.636 - 11975.215: 98.7128% ( 1) 00:07:59.306 11975.215 - 12034.793: 98.7277% ( 2) 00:07:59.306 12034.793 - 12094.371: 98.7426% ( 2) 00:07:59.306 12094.371 - 12153.949: 98.7500% ( 1) 00:07:59.306 12153.949 - 12213.527: 98.7723% ( 3) 00:07:59.306 12213.527 - 12273.105: 98.7872% ( 2) 00:07:59.306 12332.684 - 12392.262: 98.8095% ( 3) 00:07:59.306 12392.262 - 12451.840: 98.8170% ( 1) 00:07:59.306 12451.840 - 12511.418: 98.8318% ( 2) 00:07:59.306 12511.418 - 12570.996: 98.8393% ( 1) 00:07:59.306 12570.996 - 12630.575: 98.8616% ( 3) 00:07:59.306 12630.575 - 12690.153: 98.8765% ( 2) 00:07:59.306 12690.153 - 12749.731: 98.8914% ( 2) 00:07:59.306 12749.731 - 12809.309: 98.9062% ( 2) 00:07:59.306 12809.309 - 12868.887: 98.9211% ( 2) 00:07:59.306 12868.887 - 12928.465: 98.9286% ( 1) 00:07:59.306 12928.465 - 12988.044: 98.9435% ( 2) 00:07:59.306 12988.044 - 13047.622: 98.9583% ( 2) 00:07:59.306 13047.622 - 13107.200: 98.9732% ( 2) 00:07:59.306 13107.200 - 13166.778: 98.9881% ( 2) 00:07:59.306 13166.778 - 13226.356: 99.0030% ( 2) 00:07:59.306 13226.356 - 13285.935: 99.0179% ( 2) 00:07:59.306 13285.935 - 13345.513: 99.0253% ( 1) 00:07:59.306 13345.513 - 13405.091: 99.0327% ( 1) 00:07:59.306 13405.091 - 13464.669: 99.0476% ( 2) 00:07:59.306 32887.156 - 33125.469: 99.0551% ( 1) 00:07:59.306 33125.469 - 33363.782: 99.0923% ( 5) 00:07:59.306 33363.782 - 33602.095: 99.1295% ( 5) 00:07:59.306 33602.095 - 33840.407: 99.1592% ( 4) 00:07:59.306 33840.407 - 34078.720: 99.2039% ( 6) 00:07:59.306 34078.720 - 34317.033: 99.2411% ( 5) 00:07:59.306 34317.033 - 34555.345: 99.2783% ( 5) 00:07:59.306 34555.345 - 34793.658: 99.3080% ( 4) 00:07:59.306 34793.658 - 35031.971: 99.3527% ( 6) 00:07:59.306 35031.971 - 35270.284: 99.3824% ( 4) 00:07:59.306 35270.284 - 35508.596: 99.4196% ( 5) 00:07:59.306 35508.596 - 35746.909: 99.4643% ( 6) 00:07:59.306 35746.909 - 35985.222: 99.4940% ( 4) 00:07:59.306 35985.222 - 36223.535: 99.5238% ( 4) 
00:07:59.306 41466.415 - 41704.727: 99.5461% ( 3) 00:07:59.306 41704.727 - 41943.040: 99.5833% ( 5) 00:07:59.306 41943.040 - 42181.353: 99.6280% ( 6) 00:07:59.306 42181.353 - 42419.665: 99.6652% ( 5) 00:07:59.306 42419.665 - 42657.978: 99.7098% ( 6) 00:07:59.306 42657.978 - 42896.291: 99.7470% ( 5) 00:07:59.306 42896.291 - 43134.604: 99.7991% ( 7) 00:07:59.306 43134.604 - 43372.916: 99.8363% ( 5) 00:07:59.306 43372.916 - 43611.229: 99.8884% ( 7) 00:07:59.306 43611.229 - 43849.542: 99.9182% ( 4) 00:07:59.306 43849.542 - 44087.855: 99.9554% ( 5) 00:07:59.306 44087.855 - 44326.167: 99.9926% ( 5) 00:07:59.306 44326.167 - 44564.480: 100.0000% ( 1) 00:07:59.306 00:07:59.306 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:59.306 ============================================================================== 00:07:59.306 Range in us Cumulative IO count 00:07:59.306 8162.211 - 8221.789: 0.0595% ( 8) 00:07:59.306 8221.789 - 8281.367: 0.2307% ( 23) 00:07:59.306 8281.367 - 8340.945: 0.5357% ( 41) 00:07:59.306 8340.945 - 8400.524: 1.1533% ( 83) 00:07:59.306 8400.524 - 8460.102: 2.3958% ( 167) 00:07:59.306 8460.102 - 8519.680: 4.1443% ( 235) 00:07:59.306 8519.680 - 8579.258: 6.5476% ( 323) 00:07:59.306 8579.258 - 8638.836: 9.8289% ( 441) 00:07:59.306 8638.836 - 8698.415: 13.4226% ( 483) 00:07:59.306 8698.415 - 8757.993: 17.4033% ( 535) 00:07:59.306 8757.993 - 8817.571: 21.7039% ( 578) 00:07:59.306 8817.571 - 8877.149: 26.4881% ( 643) 00:07:59.306 8877.149 - 8936.727: 31.4062% ( 661) 00:07:59.306 8936.727 - 8996.305: 36.5402% ( 690) 00:07:59.306 8996.305 - 9055.884: 41.7708% ( 703) 00:07:59.306 9055.884 - 9115.462: 46.9494% ( 696) 00:07:59.306 9115.462 - 9175.040: 52.2321% ( 710) 00:07:59.306 9175.040 - 9234.618: 57.2396% ( 673) 00:07:59.306 9234.618 - 9294.196: 62.1652% ( 662) 00:07:59.306 9294.196 - 9353.775: 66.6146% ( 598) 00:07:59.306 9353.775 - 9413.353: 70.4241% ( 512) 00:07:59.306 9413.353 - 9472.931: 73.7649% ( 449) 00:07:59.306 9472.931 - 9532.509: 76.5848% ( 379) 00:07:59.306 9532.509 - 9592.087: 78.9435% ( 317) 00:07:59.306 9592.087 - 9651.665: 80.9673% ( 272) 00:07:59.306 9651.665 - 9711.244: 82.4851% ( 204) 00:07:59.306 9711.244 - 9770.822: 83.6310% ( 154) 00:07:59.306 9770.822 - 9830.400: 84.5015% ( 117) 00:07:59.306 9830.400 - 9889.978: 85.2232% ( 97) 00:07:59.306 9889.978 - 9949.556: 85.9152% ( 93) 00:07:59.306 9949.556 - 10009.135: 86.5476% ( 85) 00:07:59.306 10009.135 - 10068.713: 87.1429% ( 80) 00:07:59.306 10068.713 - 10128.291: 87.7381% ( 80) 00:07:59.306 10128.291 - 10187.869: 88.3854% ( 87) 00:07:59.306 10187.869 - 10247.447: 89.0625% ( 91) 00:07:59.306 10247.447 - 10307.025: 89.7173% ( 88) 00:07:59.306 10307.025 - 10366.604: 90.4315% ( 96) 00:07:59.306 10366.604 - 10426.182: 91.1682% ( 99) 00:07:59.306 10426.182 - 10485.760: 91.8899% ( 97) 00:07:59.306 10485.760 - 10545.338: 92.6190% ( 98) 00:07:59.306 10545.338 - 10604.916: 93.3333% ( 96) 00:07:59.306 10604.916 - 10664.495: 94.0179% ( 92) 00:07:59.306 10664.495 - 10724.073: 94.7024% ( 92) 00:07:59.306 10724.073 - 10783.651: 95.3051% ( 81) 00:07:59.306 10783.651 - 10843.229: 95.7961% ( 66) 00:07:59.306 10843.229 - 10902.807: 96.3170% ( 70) 00:07:59.306 10902.807 - 10962.385: 96.7708% ( 61) 00:07:59.306 10962.385 - 11021.964: 97.2173% ( 60) 00:07:59.306 11021.964 - 11081.542: 97.5744% ( 48) 00:07:59.306 11081.542 - 11141.120: 97.8943% ( 43) 00:07:59.306 11141.120 - 11200.698: 98.1324% ( 32) 00:07:59.306 11200.698 - 11260.276: 98.3408% ( 28) 00:07:59.306 11260.276 - 11319.855: 98.4821% ( 19) 00:07:59.307 
11319.855 - 11379.433: 98.5938% ( 15) 00:07:59.307 11379.433 - 11439.011: 98.6682% ( 10) 00:07:59.307 11439.011 - 11498.589: 98.6979% ( 4) 00:07:59.307 11498.589 - 11558.167: 98.7202% ( 3) 00:07:59.307 11558.167 - 11617.745: 98.7426% ( 3) 00:07:59.307 11617.745 - 11677.324: 98.7574% ( 2) 00:07:59.307 11677.324 - 11736.902: 98.7723% ( 2) 00:07:59.307 11736.902 - 11796.480: 98.7872% ( 2) 00:07:59.307 11796.480 - 11856.058: 98.8095% ( 3) 00:07:59.307 11856.058 - 11915.636: 98.8244% ( 2) 00:07:59.307 11915.636 - 11975.215: 98.8393% ( 2) 00:07:59.307 11975.215 - 12034.793: 98.8616% ( 3) 00:07:59.307 12034.793 - 12094.371: 98.8765% ( 2) 00:07:59.307 12094.371 - 12153.949: 98.8914% ( 2) 00:07:59.307 12153.949 - 12213.527: 98.9137% ( 3) 00:07:59.307 12213.527 - 12273.105: 98.9286% ( 2) 00:07:59.307 12273.105 - 12332.684: 98.9509% ( 3) 00:07:59.307 12332.684 - 12392.262: 98.9658% ( 2) 00:07:59.307 12392.262 - 12451.840: 98.9881% ( 3) 00:07:59.307 12451.840 - 12511.418: 98.9955% ( 1) 00:07:59.307 12511.418 - 12570.996: 99.0104% ( 2) 00:07:59.307 12570.996 - 12630.575: 99.0327% ( 3) 00:07:59.307 12630.575 - 12690.153: 99.0476% ( 2) 00:07:59.307 30980.655 - 31218.967: 99.0699% ( 3) 00:07:59.307 31218.967 - 31457.280: 99.1146% ( 6) 00:07:59.307 31457.280 - 31695.593: 99.1518% ( 5) 00:07:59.307 31695.593 - 31933.905: 99.1964% ( 6) 00:07:59.307 31933.905 - 32172.218: 99.2411% ( 6) 00:07:59.307 32172.218 - 32410.531: 99.2783% ( 5) 00:07:59.307 32410.531 - 32648.844: 99.3229% ( 6) 00:07:59.307 32648.844 - 32887.156: 99.3601% ( 5) 00:07:59.307 32887.156 - 33125.469: 99.4048% ( 6) 00:07:59.307 33125.469 - 33363.782: 99.4345% ( 4) 00:07:59.307 33363.782 - 33602.095: 99.4792% ( 6) 00:07:59.307 33602.095 - 33840.407: 99.5238% ( 6) 00:07:59.307 39083.287 - 39321.600: 99.5312% ( 1) 00:07:59.307 39321.600 - 39559.913: 99.5610% ( 4) 00:07:59.307 39559.913 - 39798.225: 99.6057% ( 6) 00:07:59.307 39798.225 - 40036.538: 99.6429% ( 5) 00:07:59.307 40036.538 - 40274.851: 99.6875% ( 6) 00:07:59.307 40274.851 - 40513.164: 99.7321% ( 6) 00:07:59.307 40513.164 - 40751.476: 99.7768% ( 6) 00:07:59.307 40751.476 - 40989.789: 99.8140% ( 5) 00:07:59.307 40989.789 - 41228.102: 99.8586% ( 6) 00:07:59.307 41228.102 - 41466.415: 99.9033% ( 6) 00:07:59.307 41466.415 - 41704.727: 99.9405% ( 5) 00:07:59.307 41704.727 - 41943.040: 99.9702% ( 4) 00:07:59.307 41943.040 - 42181.353: 100.0000% ( 4) 00:07:59.307 00:07:59.307 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:59.307 ============================================================================== 00:07:59.307 Range in us Cumulative IO count 00:07:59.307 8043.055 - 8102.633: 0.0298% ( 4) 00:07:59.307 8102.633 - 8162.211: 0.0595% ( 4) 00:07:59.307 8162.211 - 8221.789: 0.1116% ( 7) 00:07:59.307 8221.789 - 8281.367: 0.2753% ( 22) 00:07:59.307 8281.367 - 8340.945: 0.7068% ( 58) 00:07:59.307 8340.945 - 8400.524: 1.3467% ( 86) 00:07:59.307 8400.524 - 8460.102: 2.5149% ( 157) 00:07:59.307 8460.102 - 8519.680: 4.4122% ( 255) 00:07:59.307 8519.680 - 8579.258: 6.8080% ( 322) 00:07:59.307 8579.258 - 8638.836: 9.8586% ( 410) 00:07:59.307 8638.836 - 8698.415: 13.3929% ( 475) 00:07:59.307 8698.415 - 8757.993: 17.3884% ( 537) 00:07:59.307 8757.993 - 8817.571: 21.9643% ( 615) 00:07:59.307 8817.571 - 8877.149: 26.7932% ( 649) 00:07:59.307 8877.149 - 8936.727: 31.9866% ( 698) 00:07:59.307 8936.727 - 8996.305: 37.1726% ( 697) 00:07:59.307 8996.305 - 9055.884: 42.3512% ( 696) 00:07:59.307 9055.884 - 9115.462: 47.6339% ( 710) 00:07:59.307 9115.462 - 9175.040: 52.8125% ( 696) 
00:07:59.307 9175.040 - 9234.618: 57.7902% ( 669) 00:07:59.307 9234.618 - 9294.196: 62.7083% ( 661) 00:07:59.307 9294.196 - 9353.775: 67.1205% ( 593) 00:07:59.307 9353.775 - 9413.353: 71.0491% ( 528) 00:07:59.307 9413.353 - 9472.931: 74.4345% ( 455) 00:07:59.307 9472.931 - 9532.509: 77.3140% ( 387) 00:07:59.307 9532.509 - 9592.087: 79.5685% ( 303) 00:07:59.307 9592.087 - 9651.665: 81.4583% ( 254) 00:07:59.307 9651.665 - 9711.244: 82.8720% ( 190) 00:07:59.307 9711.244 - 9770.822: 83.9807% ( 149) 00:07:59.307 9770.822 - 9830.400: 84.8810% ( 121) 00:07:59.307 9830.400 - 9889.978: 85.6920% ( 109) 00:07:59.307 9889.978 - 9949.556: 86.2202% ( 71) 00:07:59.307 9949.556 - 10009.135: 86.7411% ( 70) 00:07:59.307 10009.135 - 10068.713: 87.2545% ( 69) 00:07:59.307 10068.713 - 10128.291: 87.8348% ( 78) 00:07:59.307 10128.291 - 10187.869: 88.4301% ( 80) 00:07:59.307 10187.869 - 10247.447: 89.0476% ( 83) 00:07:59.307 10247.447 - 10307.025: 89.7321% ( 92) 00:07:59.307 10307.025 - 10366.604: 90.4167% ( 92) 00:07:59.307 10366.604 - 10426.182: 91.1458% ( 98) 00:07:59.307 10426.182 - 10485.760: 91.8155% ( 90) 00:07:59.307 10485.760 - 10545.338: 92.5223% ( 95) 00:07:59.307 10545.338 - 10604.916: 93.1845% ( 89) 00:07:59.307 10604.916 - 10664.495: 93.8542% ( 90) 00:07:59.307 10664.495 - 10724.073: 94.4792% ( 84) 00:07:59.307 10724.073 - 10783.651: 95.0670% ( 79) 00:07:59.307 10783.651 - 10843.229: 95.5952% ( 71) 00:07:59.307 10843.229 - 10902.807: 96.1012% ( 68) 00:07:59.307 10902.807 - 10962.385: 96.5774% ( 64) 00:07:59.307 10962.385 - 11021.964: 97.0015% ( 57) 00:07:59.307 11021.964 - 11081.542: 97.4405% ( 59) 00:07:59.307 11081.542 - 11141.120: 97.8125% ( 50) 00:07:59.307 11141.120 - 11200.698: 98.1399% ( 44) 00:07:59.307 11200.698 - 11260.276: 98.3557% ( 29) 00:07:59.307 11260.276 - 11319.855: 98.5045% ( 20) 00:07:59.307 11319.855 - 11379.433: 98.6012% ( 13) 00:07:59.307 11379.433 - 11439.011: 98.6682% ( 9) 00:07:59.307 11439.011 - 11498.589: 98.6905% ( 3) 00:07:59.307 11498.589 - 11558.167: 98.7054% ( 2) 00:07:59.307 11558.167 - 11617.745: 98.7202% ( 2) 00:07:59.307 11617.745 - 11677.324: 98.7351% ( 2) 00:07:59.307 11677.324 - 11736.902: 98.7500% ( 2) 00:07:59.307 11736.902 - 11796.480: 98.7649% ( 2) 00:07:59.307 11796.480 - 11856.058: 98.7872% ( 3) 00:07:59.307 11856.058 - 11915.636: 98.8021% ( 2) 00:07:59.307 11915.636 - 11975.215: 98.8244% ( 3) 00:07:59.307 11975.215 - 12034.793: 98.8393% ( 2) 00:07:59.307 12034.793 - 12094.371: 98.8542% ( 2) 00:07:59.307 12094.371 - 12153.949: 98.8765% ( 3) 00:07:59.307 12153.949 - 12213.527: 98.8839% ( 1) 00:07:59.307 12213.527 - 12273.105: 98.9062% ( 3) 00:07:59.307 12273.105 - 12332.684: 98.9211% ( 2) 00:07:59.307 12332.684 - 12392.262: 98.9360% ( 2) 00:07:59.307 12392.262 - 12451.840: 98.9583% ( 3) 00:07:59.307 12451.840 - 12511.418: 98.9732% ( 2) 00:07:59.307 12511.418 - 12570.996: 98.9955% ( 3) 00:07:59.307 12570.996 - 12630.575: 99.0104% ( 2) 00:07:59.307 12630.575 - 12690.153: 99.0327% ( 3) 00:07:59.307 12690.153 - 12749.731: 99.0476% ( 2) 00:07:59.307 28835.840 - 28954.996: 99.0625% ( 2) 00:07:59.307 28954.996 - 29074.153: 99.0848% ( 3) 00:07:59.307 29074.153 - 29193.309: 99.1071% ( 3) 00:07:59.307 29193.309 - 29312.465: 99.1220% ( 2) 00:07:59.307 29312.465 - 29431.622: 99.1443% ( 3) 00:07:59.307 29431.622 - 29550.778: 99.1667% ( 3) 00:07:59.307 29550.778 - 29669.935: 99.1890% ( 3) 00:07:59.307 29669.935 - 29789.091: 99.2113% ( 3) 00:07:59.307 29789.091 - 29908.247: 99.2262% ( 2) 00:07:59.307 29908.247 - 30027.404: 99.2485% ( 3) 00:07:59.307 30027.404 - 
30146.560: 99.2634% ( 2) 00:07:59.307 30146.560 - 30265.716: 99.2857% ( 3) 00:07:59.307 30265.716 - 30384.873: 99.3080% ( 3) 00:07:59.307 30384.873 - 30504.029: 99.3304% ( 3) 00:07:59.307 30504.029 - 30742.342: 99.3750% ( 6) 00:07:59.307 30742.342 - 30980.655: 99.4122% ( 5) 00:07:59.307 30980.655 - 31218.967: 99.4568% ( 6) 00:07:59.307 31218.967 - 31457.280: 99.4940% ( 5) 00:07:59.307 31457.280 - 31695.593: 99.5238% ( 4) 00:07:59.307 37176.785 - 37415.098: 99.5536% ( 4) 00:07:59.307 37415.098 - 37653.411: 99.5982% ( 6) 00:07:59.307 37653.411 - 37891.724: 99.6429% ( 6) 00:07:59.307 37891.724 - 38130.036: 99.6801% ( 5) 00:07:59.307 38130.036 - 38368.349: 99.7247% ( 6) 00:07:59.307 38368.349 - 38606.662: 99.7693% ( 6) 00:07:59.307 38606.662 - 38844.975: 99.8140% ( 6) 00:07:59.307 38844.975 - 39083.287: 99.8586% ( 6) 00:07:59.307 39083.287 - 39321.600: 99.9107% ( 7) 00:07:59.307 39321.600 - 39559.913: 99.9479% ( 5) 00:07:59.307 39559.913 - 39798.225: 99.9926% ( 6) 00:07:59.307 39798.225 - 40036.538: 100.0000% ( 1) 00:07:59.307 00:07:59.307 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:59.307 ============================================================================== 00:07:59.307 Range in us Cumulative IO count 00:07:59.307 8043.055 - 8102.633: 0.0149% ( 2) 00:07:59.307 8102.633 - 8162.211: 0.0521% ( 5) 00:07:59.307 8162.211 - 8221.789: 0.1265% ( 10) 00:07:59.307 8221.789 - 8281.367: 0.3199% ( 26) 00:07:59.307 8281.367 - 8340.945: 0.7812% ( 62) 00:07:59.307 8340.945 - 8400.524: 1.4732% ( 93) 00:07:59.307 8400.524 - 8460.102: 2.7455% ( 171) 00:07:59.307 8460.102 - 8519.680: 4.7917% ( 275) 00:07:59.307 8519.680 - 8579.258: 7.2768% ( 334) 00:07:59.307 8579.258 - 8638.836: 10.4985% ( 433) 00:07:59.307 8638.836 - 8698.415: 14.1071% ( 485) 00:07:59.307 8698.415 - 8757.993: 18.0878% ( 535) 00:07:59.307 8757.993 - 8817.571: 22.5670% ( 602) 00:07:59.307 8817.571 - 8877.149: 27.2991% ( 636) 00:07:59.307 8877.149 - 8936.727: 32.2768% ( 669) 00:07:59.307 8936.727 - 8996.305: 37.4182% ( 691) 00:07:59.307 8996.305 - 9055.884: 42.5521% ( 690) 00:07:59.307 9055.884 - 9115.462: 47.6860% ( 690) 00:07:59.307 9115.462 - 9175.040: 52.8125% ( 689) 00:07:59.308 9175.040 - 9234.618: 57.7530% ( 664) 00:07:59.308 9234.618 - 9294.196: 62.3958% ( 624) 00:07:59.308 9294.196 - 9353.775: 66.8601% ( 600) 00:07:59.308 9353.775 - 9413.353: 70.8482% ( 536) 00:07:59.308 9413.353 - 9472.931: 74.1518% ( 444) 00:07:59.308 9472.931 - 9532.509: 76.8155% ( 358) 00:07:59.308 9532.509 - 9592.087: 79.2485% ( 327) 00:07:59.308 9592.087 - 9651.665: 81.2574% ( 270) 00:07:59.308 9651.665 - 9711.244: 82.8497% ( 214) 00:07:59.308 9711.244 - 9770.822: 83.9286% ( 145) 00:07:59.308 9770.822 - 9830.400: 84.7768% ( 114) 00:07:59.308 9830.400 - 9889.978: 85.5729% ( 107) 00:07:59.308 9889.978 - 9949.556: 86.1384% ( 76) 00:07:59.308 9949.556 - 10009.135: 86.6443% ( 68) 00:07:59.308 10009.135 - 10068.713: 87.1726% ( 71) 00:07:59.308 10068.713 - 10128.291: 87.6637% ( 66) 00:07:59.308 10128.291 - 10187.869: 88.2143% ( 74) 00:07:59.308 10187.869 - 10247.447: 88.8467% ( 85) 00:07:59.308 10247.447 - 10307.025: 89.5164% ( 90) 00:07:59.308 10307.025 - 10366.604: 90.2753% ( 102) 00:07:59.308 10366.604 - 10426.182: 91.0491% ( 104) 00:07:59.308 10426.182 - 10485.760: 91.7634% ( 96) 00:07:59.308 10485.760 - 10545.338: 92.4851% ( 97) 00:07:59.308 10545.338 - 10604.916: 93.2068% ( 97) 00:07:59.308 10604.916 - 10664.495: 93.9137% ( 95) 00:07:59.308 10664.495 - 10724.073: 94.5536% ( 86) 00:07:59.308 10724.073 - 10783.651: 95.1786% ( 84) 
00:07:59.308 10783.651 - 10843.229: 95.7515% ( 77) 00:07:59.308 10843.229 - 10902.807: 96.2798% ( 71) 00:07:59.308 10902.807 - 10962.385: 96.7485% ( 63) 00:07:59.308 10962.385 - 11021.964: 97.1949% ( 60) 00:07:59.308 11021.964 - 11081.542: 97.5893% ( 53) 00:07:59.308 11081.542 - 11141.120: 97.9018% ( 42) 00:07:59.308 11141.120 - 11200.698: 98.2068% ( 41) 00:07:59.308 11200.698 - 11260.276: 98.4226% ( 29) 00:07:59.308 11260.276 - 11319.855: 98.6235% ( 27) 00:07:59.308 11319.855 - 11379.433: 98.7277% ( 14) 00:07:59.308 11379.433 - 11439.011: 98.7649% ( 5) 00:07:59.308 11439.011 - 11498.589: 98.7946% ( 4) 00:07:59.308 11498.589 - 11558.167: 98.8095% ( 2) 00:07:59.308 11558.167 - 11617.745: 98.8244% ( 2) 00:07:59.308 11617.745 - 11677.324: 98.8467% ( 3) 00:07:59.308 11677.324 - 11736.902: 98.8616% ( 2) 00:07:59.308 11736.902 - 11796.480: 98.8765% ( 2) 00:07:59.308 11796.480 - 11856.058: 98.8988% ( 3) 00:07:59.308 11856.058 - 11915.636: 98.9137% ( 2) 00:07:59.308 11915.636 - 11975.215: 98.9360% ( 3) 00:07:59.308 12034.793 - 12094.371: 98.9583% ( 3) 00:07:59.308 12094.371 - 12153.949: 98.9732% ( 2) 00:07:59.308 12153.949 - 12213.527: 98.9881% ( 2) 00:07:59.308 12213.527 - 12273.105: 99.0104% ( 3) 00:07:59.308 12273.105 - 12332.684: 99.0253% ( 2) 00:07:59.308 12332.684 - 12392.262: 99.0402% ( 2) 00:07:59.308 12392.262 - 12451.840: 99.0476% ( 1) 00:07:59.308 26452.713 - 26571.869: 99.0625% ( 2) 00:07:59.308 26571.869 - 26691.025: 99.0774% ( 2) 00:07:59.308 26691.025 - 26810.182: 99.0997% ( 3) 00:07:59.308 26810.182 - 26929.338: 99.1220% ( 3) 00:07:59.308 26929.338 - 27048.495: 99.1369% ( 2) 00:07:59.308 27048.495 - 27167.651: 99.1592% ( 3) 00:07:59.308 27167.651 - 27286.807: 99.1815% ( 3) 00:07:59.308 27286.807 - 27405.964: 99.2039% ( 3) 00:07:59.308 27405.964 - 27525.120: 99.2262% ( 3) 00:07:59.308 27525.120 - 27644.276: 99.2411% ( 2) 00:07:59.308 27644.276 - 27763.433: 99.2634% ( 3) 00:07:59.308 27763.433 - 27882.589: 99.2857% ( 3) 00:07:59.308 27882.589 - 28001.745: 99.3006% ( 2) 00:07:59.308 28001.745 - 28120.902: 99.3229% ( 3) 00:07:59.308 28120.902 - 28240.058: 99.3452% ( 3) 00:07:59.308 28240.058 - 28359.215: 99.3676% ( 3) 00:07:59.308 28359.215 - 28478.371: 99.3899% ( 3) 00:07:59.308 28478.371 - 28597.527: 99.4048% ( 2) 00:07:59.308 28597.527 - 28716.684: 99.4271% ( 3) 00:07:59.308 28716.684 - 28835.840: 99.4494% ( 3) 00:07:59.308 28835.840 - 28954.996: 99.4643% ( 2) 00:07:59.308 28954.996 - 29074.153: 99.4866% ( 3) 00:07:59.308 29074.153 - 29193.309: 99.5089% ( 3) 00:07:59.308 29193.309 - 29312.465: 99.5238% ( 2) 00:07:59.308 34555.345 - 34793.658: 99.5461% ( 3) 00:07:59.308 34793.658 - 35031.971: 99.5908% ( 6) 00:07:59.308 35031.971 - 35270.284: 99.6429% ( 7) 00:07:59.308 35270.284 - 35508.596: 99.6875% ( 6) 00:07:59.308 35508.596 - 35746.909: 99.7321% ( 6) 00:07:59.308 35746.909 - 35985.222: 99.7768% ( 6) 00:07:59.308 35985.222 - 36223.535: 99.8214% ( 6) 00:07:59.308 36223.535 - 36461.847: 99.8661% ( 6) 00:07:59.308 36461.847 - 36700.160: 99.9107% ( 6) 00:07:59.308 36700.160 - 36938.473: 99.9554% ( 6) 00:07:59.308 36938.473 - 37176.785: 100.0000% ( 6) 00:07:59.308 00:07:59.308 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:59.308 ============================================================================== 00:07:59.308 Range in us Cumulative IO count 00:07:59.308 8102.633 - 8162.211: 0.0223% ( 3) 00:07:59.308 8162.211 - 8221.789: 0.1042% ( 11) 00:07:59.308 8221.789 - 8281.367: 0.3571% ( 34) 00:07:59.308 8281.367 - 8340.945: 0.7738% ( 56) 00:07:59.308 8340.945 - 
8400.524: 1.4658% ( 93) 00:07:59.308 8400.524 - 8460.102: 2.8051% ( 180) 00:07:59.308 8460.102 - 8519.680: 4.6801% ( 252) 00:07:59.308 8519.680 - 8579.258: 7.2991% ( 352) 00:07:59.308 8579.258 - 8638.836: 10.2679% ( 399) 00:07:59.308 8638.836 - 8698.415: 13.8467% ( 481) 00:07:59.308 8698.415 - 8757.993: 17.7976% ( 531) 00:07:59.308 8757.993 - 8817.571: 22.1205% ( 581) 00:07:59.308 8817.571 - 8877.149: 26.9643% ( 651) 00:07:59.308 8877.149 - 8936.727: 31.9122% ( 665) 00:07:59.308 8936.727 - 8996.305: 37.0833% ( 695) 00:07:59.308 8996.305 - 9055.884: 42.3289% ( 705) 00:07:59.308 9055.884 - 9115.462: 47.5298% ( 699) 00:07:59.308 9115.462 - 9175.040: 52.7381% ( 700) 00:07:59.308 9175.040 - 9234.618: 57.6637% ( 662) 00:07:59.308 9234.618 - 9294.196: 62.5074% ( 651) 00:07:59.308 9294.196 - 9353.775: 66.9643% ( 599) 00:07:59.308 9353.775 - 9413.353: 70.8557% ( 523) 00:07:59.308 9413.353 - 9472.931: 74.2039% ( 450) 00:07:59.308 9472.931 - 9532.509: 76.9420% ( 368) 00:07:59.308 9532.509 - 9592.087: 79.3527% ( 324) 00:07:59.308 9592.087 - 9651.665: 81.2872% ( 260) 00:07:59.308 9651.665 - 9711.244: 82.8943% ( 216) 00:07:59.308 9711.244 - 9770.822: 84.0327% ( 153) 00:07:59.308 9770.822 - 9830.400: 84.9554% ( 124) 00:07:59.308 9830.400 - 9889.978: 85.7292% ( 104) 00:07:59.308 9889.978 - 9949.556: 86.3914% ( 89) 00:07:59.308 9949.556 - 10009.135: 86.9940% ( 81) 00:07:59.308 10009.135 - 10068.713: 87.5074% ( 69) 00:07:59.308 10068.713 - 10128.291: 88.0283% ( 70) 00:07:59.308 10128.291 - 10187.869: 88.6086% ( 78) 00:07:59.308 10187.869 - 10247.447: 89.2708% ( 89) 00:07:59.308 10247.447 - 10307.025: 89.9405% ( 90) 00:07:59.308 10307.025 - 10366.604: 90.6548% ( 96) 00:07:59.308 10366.604 - 10426.182: 91.3318% ( 91) 00:07:59.308 10426.182 - 10485.760: 92.0685% ( 99) 00:07:59.308 10485.760 - 10545.338: 92.7827% ( 96) 00:07:59.308 10545.338 - 10604.916: 93.4003% ( 83) 00:07:59.308 10604.916 - 10664.495: 94.0997% ( 94) 00:07:59.308 10664.495 - 10724.073: 94.7321% ( 85) 00:07:59.308 10724.073 - 10783.651: 95.3646% ( 85) 00:07:59.308 10783.651 - 10843.229: 95.8780% ( 69) 00:07:59.308 10843.229 - 10902.807: 96.3914% ( 69) 00:07:59.308 10902.807 - 10962.385: 96.8080% ( 56) 00:07:59.308 10962.385 - 11021.964: 97.2396% ( 58) 00:07:59.308 11021.964 - 11081.542: 97.6562% ( 56) 00:07:59.308 11081.542 - 11141.120: 97.9911% ( 45) 00:07:59.308 11141.120 - 11200.698: 98.2812% ( 39) 00:07:59.308 11200.698 - 11260.276: 98.5268% ( 33) 00:07:59.308 11260.276 - 11319.855: 98.6979% ( 23) 00:07:59.308 11319.855 - 11379.433: 98.7946% ( 13) 00:07:59.308 11379.433 - 11439.011: 98.8244% ( 4) 00:07:59.308 11439.011 - 11498.589: 98.8690% ( 6) 00:07:59.308 11498.589 - 11558.167: 98.8988% ( 4) 00:07:59.308 11558.167 - 11617.745: 98.9211% ( 3) 00:07:59.308 11617.745 - 11677.324: 98.9360% ( 2) 00:07:59.308 11677.324 - 11736.902: 98.9583% ( 3) 00:07:59.308 11736.902 - 11796.480: 98.9732% ( 2) 00:07:59.308 11796.480 - 11856.058: 98.9881% ( 2) 00:07:59.308 11856.058 - 11915.636: 99.0104% ( 3) 00:07:59.308 11915.636 - 11975.215: 99.0179% ( 1) 00:07:59.308 11975.215 - 12034.793: 99.0327% ( 2) 00:07:59.308 12034.793 - 12094.371: 99.0476% ( 2) 00:07:59.308 24069.585 - 24188.742: 99.0551% ( 1) 00:07:59.308 24188.742 - 24307.898: 99.0699% ( 2) 00:07:59.308 24307.898 - 24427.055: 99.0848% ( 2) 00:07:59.308 24427.055 - 24546.211: 99.1071% ( 3) 00:07:59.308 24546.211 - 24665.367: 99.1295% ( 3) 00:07:59.308 24665.367 - 24784.524: 99.1443% ( 2) 00:07:59.308 24784.524 - 24903.680: 99.1667% ( 3) 00:07:59.308 24903.680 - 25022.836: 99.1890% ( 3) 
00:07:59.308 25022.836 - 25141.993: 99.2113% ( 3) 00:07:59.308 25141.993 - 25261.149: 99.2336% ( 3) 00:07:59.308 25261.149 - 25380.305: 99.2560% ( 3) 00:07:59.308 25380.305 - 25499.462: 99.2783% ( 3) 00:07:59.308 25499.462 - 25618.618: 99.3006% ( 3) 00:07:59.308 25618.618 - 25737.775: 99.3229% ( 3) 00:07:59.308 25737.775 - 25856.931: 99.3452% ( 3) 00:07:59.308 25856.931 - 25976.087: 99.3676% ( 3) 00:07:59.308 25976.087 - 26095.244: 99.3899% ( 3) 00:07:59.308 26095.244 - 26214.400: 99.4122% ( 3) 00:07:59.308 26214.400 - 26333.556: 99.4345% ( 3) 00:07:59.308 26333.556 - 26452.713: 99.4568% ( 3) 00:07:59.308 26452.713 - 26571.869: 99.4792% ( 3) 00:07:59.308 26571.869 - 26691.025: 99.5015% ( 3) 00:07:59.308 26691.025 - 26810.182: 99.5238% ( 3) 00:07:59.308 31933.905 - 32172.218: 99.5461% ( 3) 00:07:59.308 32172.218 - 32410.531: 99.5908% ( 6) 00:07:59.309 32410.531 - 32648.844: 99.6429% ( 7) 00:07:59.309 32648.844 - 32887.156: 99.6801% ( 5) 00:07:59.309 32887.156 - 33125.469: 99.7247% ( 6) 00:07:59.309 33125.469 - 33363.782: 99.7693% ( 6) 00:07:59.309 33363.782 - 33602.095: 99.8140% ( 6) 00:07:59.309 33602.095 - 33840.407: 99.8586% ( 6) 00:07:59.309 33840.407 - 34078.720: 99.9107% ( 7) 00:07:59.309 34078.720 - 34317.033: 99.9628% ( 7) 00:07:59.309 34317.033 - 34555.345: 100.0000% ( 5) 00:07:59.309 00:07:59.309 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:59.309 ============================================================================== 00:07:59.309 Range in us Cumulative IO count 00:07:59.309 8162.211 - 8221.789: 0.0967% ( 13) 00:07:59.309 8221.789 - 8281.367: 0.3199% ( 30) 00:07:59.309 8281.367 - 8340.945: 0.7812% ( 62) 00:07:59.309 8340.945 - 8400.524: 1.5179% ( 99) 00:07:59.309 8400.524 - 8460.102: 2.6786% ( 156) 00:07:59.309 8460.102 - 8519.680: 4.6354% ( 263) 00:07:59.309 8519.680 - 8579.258: 7.2768% ( 355) 00:07:59.309 8579.258 - 8638.836: 10.4390% ( 425) 00:07:59.309 8638.836 - 8698.415: 13.9583% ( 473) 00:07:59.309 8698.415 - 8757.993: 18.0208% ( 546) 00:07:59.309 8757.993 - 8817.571: 22.3958% ( 588) 00:07:59.309 8817.571 - 8877.149: 27.1131% ( 634) 00:07:59.309 8877.149 - 8936.727: 32.1131% ( 672) 00:07:59.309 8936.727 - 8996.305: 37.1726% ( 680) 00:07:59.309 8996.305 - 9055.884: 42.5074% ( 717) 00:07:59.309 9055.884 - 9115.462: 47.7307% ( 702) 00:07:59.309 9115.462 - 9175.040: 52.9092% ( 696) 00:07:59.309 9175.040 - 9234.618: 57.8943% ( 670) 00:07:59.309 9234.618 - 9294.196: 62.7455% ( 652) 00:07:59.309 9294.196 - 9353.775: 67.1057% ( 586) 00:07:59.309 9353.775 - 9413.353: 71.0417% ( 529) 00:07:59.309 9413.353 - 9472.931: 74.2113% ( 426) 00:07:59.309 9472.931 - 9532.509: 76.7783% ( 345) 00:07:59.309 9532.509 - 9592.087: 79.1890% ( 324) 00:07:59.309 9592.087 - 9651.665: 81.1458% ( 263) 00:07:59.309 9651.665 - 9711.244: 82.7381% ( 214) 00:07:59.309 9711.244 - 9770.822: 83.8765% ( 153) 00:07:59.309 9770.822 - 9830.400: 84.7173% ( 113) 00:07:59.309 9830.400 - 9889.978: 85.5432% ( 111) 00:07:59.309 9889.978 - 9949.556: 86.1830% ( 86) 00:07:59.309 9949.556 - 10009.135: 86.7857% ( 81) 00:07:59.309 10009.135 - 10068.713: 87.3363% ( 74) 00:07:59.309 10068.713 - 10128.291: 87.9911% ( 88) 00:07:59.309 10128.291 - 10187.869: 88.5640% ( 77) 00:07:59.309 10187.869 - 10247.447: 89.1741% ( 82) 00:07:59.309 10247.447 - 10307.025: 89.8661% ( 93) 00:07:59.309 10307.025 - 10366.604: 90.5283% ( 89) 00:07:59.309 10366.604 - 10426.182: 91.2723% ( 100) 00:07:59.309 10426.182 - 10485.760: 91.9866% ( 96) 00:07:59.309 10485.760 - 10545.338: 92.7083% ( 97) 00:07:59.309 10545.338 - 
10604.916: 93.4301% ( 97) 00:07:59.309 10604.916 - 10664.495: 94.1369% ( 95) 00:07:59.309 10664.495 - 10724.073: 94.7991% ( 89) 00:07:59.309 10724.073 - 10783.651: 95.4539% ( 88) 00:07:59.309 10783.651 - 10843.229: 96.0268% ( 77) 00:07:59.309 10843.229 - 10902.807: 96.5476% ( 70) 00:07:59.309 10902.807 - 10962.385: 97.0164% ( 63) 00:07:59.309 10962.385 - 11021.964: 97.4554% ( 59) 00:07:59.309 11021.964 - 11081.542: 97.7902% ( 45) 00:07:59.309 11081.542 - 11141.120: 98.1250% ( 45) 00:07:59.309 11141.120 - 11200.698: 98.3854% ( 35) 00:07:59.309 11200.698 - 11260.276: 98.5789% ( 26) 00:07:59.309 11260.276 - 11319.855: 98.7202% ( 19) 00:07:59.309 11319.855 - 11379.433: 98.8318% ( 15) 00:07:59.309 11379.433 - 11439.011: 98.8839% ( 7) 00:07:59.309 11439.011 - 11498.589: 98.9286% ( 6) 00:07:59.309 11498.589 - 11558.167: 98.9583% ( 4) 00:07:59.309 11558.167 - 11617.745: 98.9658% ( 1) 00:07:59.309 11617.745 - 11677.324: 98.9881% ( 3) 00:07:59.309 11677.324 - 11736.902: 99.0104% ( 3) 00:07:59.309 11736.902 - 11796.480: 99.0327% ( 3) 00:07:59.309 11796.480 - 11856.058: 99.0476% ( 2) 00:07:59.309 21686.458 - 21805.615: 99.0699% ( 3) 00:07:59.309 21805.615 - 21924.771: 99.0848% ( 2) 00:07:59.309 21924.771 - 22043.927: 99.1071% ( 3) 00:07:59.309 22043.927 - 22163.084: 99.1295% ( 3) 00:07:59.309 22163.084 - 22282.240: 99.1443% ( 2) 00:07:59.309 22282.240 - 22401.396: 99.1667% ( 3) 00:07:59.309 22401.396 - 22520.553: 99.1890% ( 3) 00:07:59.309 22520.553 - 22639.709: 99.2188% ( 4) 00:07:59.309 22639.709 - 22758.865: 99.2411% ( 3) 00:07:59.309 22758.865 - 22878.022: 99.2634% ( 3) 00:07:59.309 22878.022 - 22997.178: 99.2857% ( 3) 00:07:59.309 22997.178 - 23116.335: 99.3006% ( 2) 00:07:59.309 23116.335 - 23235.491: 99.3304% ( 4) 00:07:59.309 23235.491 - 23354.647: 99.3527% ( 3) 00:07:59.309 23354.647 - 23473.804: 99.3750% ( 3) 00:07:59.309 23473.804 - 23592.960: 99.3973% ( 3) 00:07:59.309 23592.960 - 23712.116: 99.4196% ( 3) 00:07:59.309 23712.116 - 23831.273: 99.4420% ( 3) 00:07:59.309 23831.273 - 23950.429: 99.4643% ( 3) 00:07:59.309 23950.429 - 24069.585: 99.4866% ( 3) 00:07:59.309 24069.585 - 24188.742: 99.5164% ( 4) 00:07:59.309 24188.742 - 24307.898: 99.5238% ( 1) 00:07:59.309 29312.465 - 29431.622: 99.5387% ( 2) 00:07:59.309 29431.622 - 29550.778: 99.5610% ( 3) 00:07:59.309 29550.778 - 29669.935: 99.5833% ( 3) 00:07:59.309 29669.935 - 29789.091: 99.6057% ( 3) 00:07:59.309 29789.091 - 29908.247: 99.6354% ( 4) 00:07:59.309 29908.247 - 30027.404: 99.6577% ( 3) 00:07:59.309 30027.404 - 30146.560: 99.6801% ( 3) 00:07:59.309 30146.560 - 30265.716: 99.7024% ( 3) 00:07:59.309 30265.716 - 30384.873: 99.7247% ( 3) 00:07:59.309 30384.873 - 30504.029: 99.7545% ( 4) 00:07:59.309 30504.029 - 30742.342: 99.8065% ( 7) 00:07:59.309 30742.342 - 30980.655: 99.8512% ( 6) 00:07:59.309 30980.655 - 31218.967: 99.9033% ( 7) 00:07:59.309 31218.967 - 31457.280: 99.9554% ( 7) 00:07:59.309 31457.280 - 31695.593: 100.0000% ( 6) 00:07:59.309 00:07:59.309 08:08:04 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:00.688 Initializing NVMe Controllers 00:08:00.688 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.688 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.688 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.688 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.688 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:00.688 Associating PCIE (0000:00:11.0) NSID 1 with lcore 
0 00:08:00.688 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:00.688 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:00.688 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:00.688 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:00.688 Initialization complete. Launching workers. 00:08:00.688 ======================================================== 00:08:00.688 Latency(us) 00:08:00.688 Device Information : IOPS MiB/s Average min max 00:08:00.688 PCIE (0000:00:10.0) NSID 1 from core 0: 12711.00 148.96 10094.61 7936.16 33894.06 00:08:00.688 PCIE (0000:00:11.0) NSID 1 from core 0: 12711.00 148.96 10082.74 8023.64 32204.08 00:08:00.688 PCIE (0000:00:13.0) NSID 1 from core 0: 12711.00 148.96 10069.90 7927.64 31249.94 00:08:00.688 PCIE (0000:00:12.0) NSID 1 from core 0: 12711.00 148.96 10056.99 7946.02 30281.51 00:08:00.688 PCIE (0000:00:12.0) NSID 2 from core 0: 12711.00 148.96 10044.36 7903.57 28905.70 00:08:00.688 PCIE (0000:00:12.0) NSID 3 from core 0: 12711.00 148.96 10031.49 7918.45 27244.93 00:08:00.688 ======================================================== 00:08:00.688 Total : 76265.98 893.74 10063.35 7903.57 33894.06 00:08:00.688 00:08:00.688 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:00.688 ================================================================================= 00:08:00.688 1.00000% : 8102.633us 00:08:00.688 10.00000% : 8519.680us 00:08:00.688 25.00000% : 9055.884us 00:08:00.688 50.00000% : 10009.135us 00:08:00.688 75.00000% : 10664.495us 00:08:00.688 90.00000% : 11141.120us 00:08:00.688 95.00000% : 11677.324us 00:08:00.688 98.00000% : 12392.262us 00:08:00.688 99.00000% : 24188.742us 00:08:00.688 99.50000% : 31933.905us 00:08:00.688 99.90000% : 33602.095us 00:08:00.688 99.99000% : 33840.407us 00:08:00.688 99.99900% : 34078.720us 00:08:00.688 99.99990% : 34078.720us 00:08:00.688 99.99999% : 34078.720us 00:08:00.688 00:08:00.688 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:00.688 ================================================================================= 00:08:00.688 1.00000% : 8281.367us 00:08:00.688 10.00000% : 8638.836us 00:08:00.688 25.00000% : 8996.305us 00:08:00.688 50.00000% : 10128.291us 00:08:00.688 75.00000% : 10545.338us 00:08:00.688 90.00000% : 11081.542us 00:08:00.688 95.00000% : 11617.745us 00:08:00.688 98.00000% : 12332.684us 00:08:00.688 99.00000% : 24069.585us 00:08:00.688 99.50000% : 30504.029us 00:08:00.688 99.90000% : 31933.905us 00:08:00.688 99.99000% : 32410.531us 00:08:00.688 99.99900% : 32410.531us 00:08:00.688 99.99990% : 32410.531us 00:08:00.688 99.99999% : 32410.531us 00:08:00.688 00:08:00.688 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:00.688 ================================================================================= 00:08:00.688 1.00000% : 8221.789us 00:08:00.688 10.00000% : 8579.258us 00:08:00.688 25.00000% : 8996.305us 00:08:00.688 50.00000% : 10128.291us 00:08:00.688 75.00000% : 10545.338us 00:08:00.688 90.00000% : 11081.542us 00:08:00.688 95.00000% : 11558.167us 00:08:00.688 98.00000% : 12332.684us 00:08:00.688 99.00000% : 23116.335us 00:08:00.688 99.50000% : 29550.778us 00:08:00.688 99.90000% : 30980.655us 00:08:00.688 99.99000% : 31457.280us 00:08:00.688 99.99900% : 31457.280us 00:08:00.688 99.99990% : 31457.280us 00:08:00.688 99.99999% : 31457.280us 00:08:00.688 00:08:00.688 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:00.688 
================================================================================= 00:08:00.688 1.00000% : 8281.367us 00:08:00.688 10.00000% : 8579.258us 00:08:00.688 25.00000% : 8996.305us 00:08:00.688 50.00000% : 10068.713us 00:08:00.688 75.00000% : 10604.916us 00:08:00.688 90.00000% : 11081.542us 00:08:00.688 95.00000% : 11617.745us 00:08:00.688 98.00000% : 12153.949us 00:08:00.688 99.00000% : 21448.145us 00:08:00.688 99.50000% : 28001.745us 00:08:00.688 99.90000% : 29908.247us 00:08:00.688 99.99000% : 30265.716us 00:08:00.688 99.99900% : 30384.873us 00:08:00.688 99.99990% : 30384.873us 00:08:00.688 99.99999% : 30384.873us 00:08:00.688 00:08:00.688 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:00.688 ================================================================================= 00:08:00.688 1.00000% : 8221.789us 00:08:00.688 10.00000% : 8579.258us 00:08:00.688 25.00000% : 8996.305us 00:08:00.688 50.00000% : 10128.291us 00:08:00.688 75.00000% : 10604.916us 00:08:00.688 90.00000% : 11081.542us 00:08:00.688 95.00000% : 11617.745us 00:08:00.688 98.00000% : 12213.527us 00:08:00.688 99.00000% : 20137.425us 00:08:00.688 99.50000% : 26929.338us 00:08:00.688 99.90000% : 28597.527us 00:08:00.688 99.99000% : 28954.996us 00:08:00.688 99.99900% : 28954.996us 00:08:00.688 99.99990% : 28954.996us 00:08:00.688 99.99999% : 28954.996us 00:08:00.688 00:08:00.688 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:00.688 ================================================================================= 00:08:00.688 1.00000% : 8281.367us 00:08:00.688 10.00000% : 8638.836us 00:08:00.688 25.00000% : 8996.305us 00:08:00.688 50.00000% : 10128.291us 00:08:00.688 75.00000% : 10545.338us 00:08:00.688 90.00000% : 11081.542us 00:08:00.688 95.00000% : 11677.324us 00:08:00.688 98.00000% : 12332.684us 00:08:00.688 99.00000% : 18707.549us 00:08:00.688 99.50000% : 24427.055us 00:08:00.688 99.90000% : 27048.495us 00:08:00.688 99.99000% : 27286.807us 00:08:00.688 99.99900% : 27286.807us 00:08:00.688 99.99990% : 27286.807us 00:08:00.688 99.99999% : 27286.807us 00:08:00.688 00:08:00.688 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:00.688 ============================================================================== 00:08:00.688 Range in us Cumulative IO count 00:08:00.688 7923.898 - 7983.476: 0.2198% ( 28) 00:08:00.688 7983.476 - 8043.055: 0.6831% ( 59) 00:08:00.688 8043.055 - 8102.633: 1.1778% ( 63) 00:08:00.688 8102.633 - 8162.211: 1.9080% ( 93) 00:08:00.688 8162.211 - 8221.789: 2.5126% ( 77) 00:08:00.688 8221.789 - 8281.367: 3.7845% ( 162) 00:08:00.688 8281.367 - 8340.945: 5.1115% ( 169) 00:08:00.688 8340.945 - 8400.524: 6.5798% ( 187) 00:08:00.688 8400.524 - 8460.102: 8.5192% ( 247) 00:08:00.688 8460.102 - 8519.680: 10.2858% ( 225) 00:08:00.688 8519.680 - 8579.258: 12.0132% ( 220) 00:08:00.688 8579.258 - 8638.836: 13.6856% ( 213) 00:08:00.688 8638.836 - 8698.415: 15.3266% ( 209) 00:08:00.688 8698.415 - 8757.993: 16.9127% ( 202) 00:08:00.688 8757.993 - 8817.571: 18.5537% ( 209) 00:08:00.688 8817.571 - 8877.149: 20.1790% ( 207) 00:08:00.688 8877.149 - 8936.727: 21.6944% ( 193) 00:08:00.688 8936.727 - 8996.305: 23.4139% ( 219) 00:08:00.688 8996.305 - 9055.884: 25.0314% ( 206) 00:08:00.688 9055.884 - 9115.462: 26.5389% ( 192) 00:08:00.688 9115.462 - 9175.040: 27.9523% ( 180) 00:08:00.688 9175.040 - 9234.618: 29.2164% ( 161) 00:08:00.688 9234.618 - 9294.196: 30.4413% ( 156) 00:08:00.688 9294.196 - 9353.775: 31.5641% ( 143) 00:08:00.688 9353.775 - 9413.353: 
32.7811% ( 155) 00:08:00.688 9413.353 - 9472.931: 33.9746% ( 152) 00:08:00.688 9472.931 - 9532.509: 34.8068% ( 106) 00:08:00.688 9532.509 - 9592.087: 35.9768% ( 149) 00:08:00.688 9592.087 - 9651.665: 37.4686% ( 190) 00:08:00.688 9651.665 - 9711.244: 39.2588% ( 228) 00:08:00.688 9711.244 - 9770.822: 41.2060% ( 248) 00:08:00.688 9770.822 - 9830.400: 43.5773% ( 302) 00:08:00.688 9830.400 - 9889.978: 45.7365% ( 275) 00:08:00.688 9889.978 - 9949.556: 47.9114% ( 277) 00:08:00.688 9949.556 - 10009.135: 50.1806% ( 289) 00:08:00.688 10009.135 - 10068.713: 52.7403% ( 326) 00:08:00.688 10068.713 - 10128.291: 55.3156% ( 328) 00:08:00.688 10128.291 - 10187.869: 57.5927% ( 290) 00:08:00.688 10187.869 - 10247.447: 59.9874% ( 305) 00:08:00.688 10247.447 - 10307.025: 62.3037% ( 295) 00:08:00.688 10307.025 - 10366.604: 64.5572% ( 287) 00:08:00.688 10366.604 - 10426.182: 67.0776% ( 321) 00:08:00.689 10426.182 - 10485.760: 69.3075% ( 284) 00:08:00.689 10485.760 - 10545.338: 71.7023% ( 305) 00:08:00.689 10545.338 - 10604.916: 74.1520% ( 312) 00:08:00.689 10604.916 - 10664.495: 76.5075% ( 300) 00:08:00.689 10664.495 - 10724.073: 78.7453% ( 285) 00:08:00.689 10724.073 - 10783.651: 80.8574% ( 269) 00:08:00.689 10783.651 - 10843.229: 83.0009% ( 273) 00:08:00.689 10843.229 - 10902.807: 84.7911% ( 228) 00:08:00.689 10902.807 - 10962.385: 86.5499% ( 224) 00:08:00.689 10962.385 - 11021.964: 87.9868% ( 183) 00:08:00.689 11021.964 - 11081.542: 89.2745% ( 164) 00:08:00.689 11081.542 - 11141.120: 90.3659% ( 139) 00:08:00.689 11141.120 - 11200.698: 91.2296% ( 110) 00:08:00.689 11200.698 - 11260.276: 91.9912% ( 97) 00:08:00.689 11260.276 - 11319.855: 92.5958% ( 77) 00:08:00.689 11319.855 - 11379.433: 93.1925% ( 76) 00:08:00.689 11379.433 - 11439.011: 93.6872% ( 63) 00:08:00.689 11439.011 - 11498.589: 93.9934% ( 39) 00:08:00.689 11498.589 - 11558.167: 94.2839% ( 37) 00:08:00.689 11558.167 - 11617.745: 94.7001% ( 53) 00:08:00.689 11617.745 - 11677.324: 95.0455% ( 44) 00:08:00.689 11677.324 - 11736.902: 95.3282% ( 36) 00:08:00.689 11736.902 - 11796.480: 95.5638% ( 30) 00:08:00.689 11796.480 - 11856.058: 95.8386% ( 35) 00:08:00.689 11856.058 - 11915.636: 96.0898% ( 32) 00:08:00.689 11915.636 - 11975.215: 96.3411% ( 32) 00:08:00.689 11975.215 - 12034.793: 96.5845% ( 31) 00:08:00.689 12034.793 - 12094.371: 96.8279% ( 31) 00:08:00.689 12094.371 - 12153.949: 97.0634% ( 30) 00:08:00.689 12153.949 - 12213.527: 97.3304% ( 34) 00:08:00.689 12213.527 - 12273.105: 97.5581% ( 29) 00:08:00.689 12273.105 - 12332.684: 97.8094% ( 32) 00:08:00.689 12332.684 - 12392.262: 98.0135% ( 26) 00:08:00.689 12392.262 - 12451.840: 98.1862% ( 22) 00:08:00.689 12451.840 - 12511.418: 98.2962% ( 14) 00:08:00.689 12511.418 - 12570.996: 98.3904% ( 12) 00:08:00.689 12570.996 - 12630.575: 98.5160% ( 16) 00:08:00.689 12630.575 - 12690.153: 98.5867% ( 9) 00:08:00.689 12690.153 - 12749.731: 98.6731% ( 11) 00:08:00.689 12749.731 - 12809.309: 98.7359% ( 8) 00:08:00.689 12809.309 - 12868.887: 98.7987% ( 8) 00:08:00.689 12868.887 - 12928.465: 98.8536% ( 7) 00:08:00.689 12928.465 - 12988.044: 98.9008% ( 6) 00:08:00.689 12988.044 - 13047.622: 98.9322% ( 4) 00:08:00.689 13047.622 - 13107.200: 98.9557% ( 3) 00:08:00.689 13107.200 - 13166.778: 98.9793% ( 3) 00:08:00.689 13166.778 - 13226.356: 98.9871% ( 1) 00:08:00.689 13226.356 - 13285.935: 98.9950% ( 1) 00:08:00.689 24069.585 - 24188.742: 99.0028% ( 1) 00:08:00.689 24188.742 - 24307.898: 99.0185% ( 2) 00:08:00.689 24307.898 - 24427.055: 99.0499% ( 4) 00:08:00.689 24427.055 - 24546.211: 99.0813% ( 4) 00:08:00.689 24546.211 
- 24665.367: 99.1128% ( 4) 00:08:00.689 24665.367 - 24784.524: 99.1520% ( 5) 00:08:00.689 24784.524 - 24903.680: 99.1756% ( 3) 00:08:00.689 24903.680 - 25022.836: 99.2070% ( 4) 00:08:00.689 25022.836 - 25141.993: 99.2462% ( 5) 00:08:00.689 25141.993 - 25261.149: 99.2776% ( 4) 00:08:00.689 25261.149 - 25380.305: 99.3169% ( 5) 00:08:00.689 25380.305 - 25499.462: 99.3483% ( 4) 00:08:00.689 25499.462 - 25618.618: 99.3876% ( 5) 00:08:00.689 25618.618 - 25737.775: 99.4190% ( 4) 00:08:00.689 25737.775 - 25856.931: 99.4582% ( 5) 00:08:00.689 25856.931 - 25976.087: 99.4739% ( 2) 00:08:00.689 25976.087 - 26095.244: 99.4975% ( 3) 00:08:00.689 31695.593 - 31933.905: 99.5289% ( 4) 00:08:00.689 31933.905 - 32172.218: 99.5839% ( 7) 00:08:00.689 32172.218 - 32410.531: 99.6545% ( 9) 00:08:00.689 32410.531 - 32648.844: 99.7016% ( 6) 00:08:00.689 32648.844 - 32887.156: 99.7566% ( 7) 00:08:00.689 32887.156 - 33125.469: 99.8194% ( 8) 00:08:00.689 33125.469 - 33363.782: 99.8665% ( 6) 00:08:00.689 33363.782 - 33602.095: 99.9372% ( 9) 00:08:00.689 33602.095 - 33840.407: 99.9921% ( 7) 00:08:00.689 33840.407 - 34078.720: 100.0000% ( 1) 00:08:00.689 00:08:00.689 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:00.689 ============================================================================== 00:08:00.689 Range in us Cumulative IO count 00:08:00.689 7983.476 - 8043.055: 0.0314% ( 4) 00:08:00.689 8043.055 - 8102.633: 0.1099% ( 10) 00:08:00.689 8102.633 - 8162.211: 0.3376% ( 29) 00:08:00.689 8162.211 - 8221.789: 0.6988% ( 46) 00:08:00.689 8221.789 - 8281.367: 1.4918% ( 101) 00:08:00.689 8281.367 - 8340.945: 2.6146% ( 143) 00:08:00.689 8340.945 - 8400.524: 3.9494% ( 170) 00:08:00.689 8400.524 - 8460.102: 5.4099% ( 186) 00:08:00.689 8460.102 - 8519.680: 7.3492% ( 247) 00:08:00.689 8519.680 - 8579.258: 9.8540% ( 319) 00:08:00.689 8579.258 - 8638.836: 12.4136% ( 326) 00:08:00.689 8638.836 - 8698.415: 15.0204% ( 332) 00:08:00.689 8698.415 - 8757.993: 17.6508% ( 335) 00:08:00.689 8757.993 - 8817.571: 20.0769% ( 309) 00:08:00.689 8817.571 - 8877.149: 22.3068% ( 284) 00:08:00.689 8877.149 - 8936.727: 24.3797% ( 264) 00:08:00.689 8936.727 - 8996.305: 25.9658% ( 202) 00:08:00.689 8996.305 - 9055.884: 27.4497% ( 189) 00:08:00.689 9055.884 - 9115.462: 28.7060% ( 160) 00:08:00.689 9115.462 - 9175.040: 29.8602% ( 147) 00:08:00.689 9175.040 - 9234.618: 30.7867% ( 118) 00:08:00.689 9234.618 - 9294.196: 31.5013% ( 91) 00:08:00.689 9294.196 - 9353.775: 32.0587% ( 71) 00:08:00.689 9353.775 - 9413.353: 32.4984% ( 56) 00:08:00.689 9413.353 - 9472.931: 32.9774% ( 61) 00:08:00.689 9472.931 - 9532.509: 33.4171% ( 56) 00:08:00.689 9532.509 - 9592.087: 34.0452% ( 80) 00:08:00.689 9592.087 - 9651.665: 34.8304% ( 100) 00:08:00.689 9651.665 - 9711.244: 35.6548% ( 105) 00:08:00.689 9711.244 - 9770.822: 36.9111% ( 160) 00:08:00.689 9770.822 - 9830.400: 38.5050% ( 203) 00:08:00.689 9830.400 - 9889.978: 40.3894% ( 240) 00:08:00.689 9889.978 - 9949.556: 43.1297% ( 349) 00:08:00.689 9949.556 - 10009.135: 46.0270% ( 369) 00:08:00.689 10009.135 - 10068.713: 48.9243% ( 369) 00:08:00.689 10068.713 - 10128.291: 52.1435% ( 410) 00:08:00.689 10128.291 - 10187.869: 55.5826% ( 438) 00:08:00.689 10187.869 - 10247.447: 59.1552% ( 455) 00:08:00.689 10247.447 - 10307.025: 62.9161% ( 479) 00:08:00.689 10307.025 - 10366.604: 66.4023% ( 444) 00:08:00.689 10366.604 - 10426.182: 69.6294% ( 411) 00:08:00.689 10426.182 - 10485.760: 72.6288% ( 382) 00:08:00.689 10485.760 - 10545.338: 75.5496% ( 372) 00:08:00.689 10545.338 - 10604.916: 78.0308% ( 316) 
00:08:00.689 10604.916 - 10664.495: 80.3313% ( 293) 00:08:00.689 10664.495 - 10724.073: 82.5377% ( 281) 00:08:00.689 10724.073 - 10783.651: 84.4692% ( 246) 00:08:00.689 10783.651 - 10843.229: 86.0553% ( 202) 00:08:00.689 10843.229 - 10902.807: 87.5000% ( 184) 00:08:00.689 10902.807 - 10962.385: 88.7013% ( 153) 00:08:00.689 10962.385 - 11021.964: 89.8084% ( 141) 00:08:00.689 11021.964 - 11081.542: 90.8056% ( 127) 00:08:00.689 11081.542 - 11141.120: 91.5986% ( 101) 00:08:00.689 11141.120 - 11200.698: 92.2582% ( 84) 00:08:00.689 11200.698 - 11260.276: 92.8235% ( 72) 00:08:00.689 11260.276 - 11319.855: 93.2553% ( 55) 00:08:00.689 11319.855 - 11379.433: 93.7186% ( 59) 00:08:00.689 11379.433 - 11439.011: 94.1190% ( 51) 00:08:00.689 11439.011 - 11498.589: 94.4959% ( 48) 00:08:00.689 11498.589 - 11558.167: 94.8807% ( 49) 00:08:00.689 11558.167 - 11617.745: 95.2183% ( 43) 00:08:00.689 11617.745 - 11677.324: 95.5323% ( 40) 00:08:00.689 11677.324 - 11736.902: 95.7522% ( 28) 00:08:00.689 11736.902 - 11796.480: 96.0192% ( 34) 00:08:00.689 11796.480 - 11856.058: 96.3803% ( 46) 00:08:00.689 11856.058 - 11915.636: 96.6394% ( 33) 00:08:00.689 11915.636 - 11975.215: 96.9064% ( 34) 00:08:00.689 11975.215 - 12034.793: 97.2048% ( 38) 00:08:00.689 12034.793 - 12094.371: 97.3854% ( 23) 00:08:00.689 12094.371 - 12153.949: 97.5581% ( 22) 00:08:00.689 12153.949 - 12213.527: 97.7622% ( 26) 00:08:00.689 12213.527 - 12273.105: 97.9507% ( 24) 00:08:00.689 12273.105 - 12332.684: 98.1234% ( 22) 00:08:00.689 12332.684 - 12392.262: 98.2334% ( 14) 00:08:00.689 12392.262 - 12451.840: 98.3511% ( 15) 00:08:00.689 12451.840 - 12511.418: 98.4454% ( 12) 00:08:00.689 12511.418 - 12570.996: 98.5317% ( 11) 00:08:00.689 12570.996 - 12630.575: 98.6181% ( 11) 00:08:00.689 12630.575 - 12690.153: 98.7202% ( 13) 00:08:00.689 12690.153 - 12749.731: 98.7908% ( 9) 00:08:00.689 12749.731 - 12809.309: 98.8536% ( 8) 00:08:00.689 12809.309 - 12868.887: 98.9008% ( 6) 00:08:00.689 12868.887 - 12928.465: 98.9322% ( 4) 00:08:00.689 12928.465 - 12988.044: 98.9557% ( 3) 00:08:00.689 12988.044 - 13047.622: 98.9793% ( 3) 00:08:00.689 13047.622 - 13107.200: 98.9950% ( 2) 00:08:00.689 23950.429 - 24069.585: 99.0185% ( 3) 00:08:00.689 24069.585 - 24188.742: 99.0578% ( 5) 00:08:00.689 24188.742 - 24307.898: 99.0892% ( 4) 00:08:00.689 24307.898 - 24427.055: 99.1206% ( 4) 00:08:00.689 24427.055 - 24546.211: 99.1599% ( 5) 00:08:00.689 24546.211 - 24665.367: 99.1991% ( 5) 00:08:00.689 24665.367 - 24784.524: 99.2305% ( 4) 00:08:00.689 24784.524 - 24903.680: 99.2698% ( 5) 00:08:00.689 24903.680 - 25022.836: 99.3090% ( 5) 00:08:00.689 25022.836 - 25141.993: 99.3483% ( 5) 00:08:00.689 25141.993 - 25261.149: 99.3797% ( 4) 00:08:00.689 25261.149 - 25380.305: 99.4190% ( 5) 00:08:00.689 25380.305 - 25499.462: 99.4582% ( 5) 00:08:00.689 25499.462 - 25618.618: 99.4896% ( 4) 00:08:00.689 25618.618 - 25737.775: 99.4975% ( 1) 00:08:00.689 30384.873 - 30504.029: 99.5210% ( 3) 00:08:00.689 30504.029 - 30742.342: 99.5917% ( 9) 00:08:00.689 30742.342 - 30980.655: 99.6467% ( 7) 00:08:00.689 30980.655 - 31218.967: 99.7095% ( 8) 00:08:00.690 31218.967 - 31457.280: 99.7802% ( 9) 00:08:00.690 31457.280 - 31695.593: 99.8508% ( 9) 00:08:00.690 31695.593 - 31933.905: 99.9136% ( 8) 00:08:00.690 31933.905 - 32172.218: 99.9843% ( 9) 00:08:00.690 32172.218 - 32410.531: 100.0000% ( 2) 00:08:00.690 00:08:00.690 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:00.690 ============================================================================== 00:08:00.690 Range in us 
Cumulative IO count 00:08:00.690 7923.898 - 7983.476: 0.0079% ( 1) 00:08:00.690 7983.476 - 8043.055: 0.0314% ( 3) 00:08:00.690 8043.055 - 8102.633: 0.2513% ( 28) 00:08:00.690 8102.633 - 8162.211: 0.6203% ( 47) 00:08:00.690 8162.211 - 8221.789: 1.0600% ( 56) 00:08:00.690 8221.789 - 8281.367: 1.6960% ( 81) 00:08:00.690 8281.367 - 8340.945: 2.6225% ( 118) 00:08:00.690 8340.945 - 8400.524: 4.1614% ( 196) 00:08:00.690 8400.524 - 8460.102: 6.0066% ( 235) 00:08:00.690 8460.102 - 8519.680: 8.1030% ( 267) 00:08:00.690 8519.680 - 8579.258: 10.4821% ( 303) 00:08:00.690 8579.258 - 8638.836: 13.0104% ( 322) 00:08:00.690 8638.836 - 8698.415: 15.4444% ( 310) 00:08:00.690 8698.415 - 8757.993: 17.7607% ( 295) 00:08:00.690 8757.993 - 8817.571: 20.0848% ( 296) 00:08:00.690 8817.571 - 8877.149: 22.2676% ( 278) 00:08:00.690 8877.149 - 8936.727: 24.2384% ( 251) 00:08:00.690 8936.727 - 8996.305: 26.0207% ( 227) 00:08:00.690 8996.305 - 9055.884: 27.4969% ( 188) 00:08:00.690 9055.884 - 9115.462: 28.8081% ( 167) 00:08:00.690 9115.462 - 9175.040: 29.8524% ( 133) 00:08:00.690 9175.040 - 9234.618: 30.7082% ( 109) 00:08:00.690 9234.618 - 9294.196: 31.4463% ( 94) 00:08:00.690 9294.196 - 9353.775: 32.0430% ( 76) 00:08:00.690 9353.775 - 9413.353: 32.5377% ( 63) 00:08:00.690 9413.353 - 9472.931: 33.0088% ( 60) 00:08:00.690 9472.931 - 9532.509: 33.7233% ( 91) 00:08:00.690 9532.509 - 9592.087: 34.3750% ( 83) 00:08:00.690 9592.087 - 9651.665: 35.0503% ( 86) 00:08:00.690 9651.665 - 9711.244: 36.1731% ( 143) 00:08:00.690 9711.244 - 9770.822: 37.5550% ( 176) 00:08:00.690 9770.822 - 9830.400: 39.4629% ( 243) 00:08:00.690 9830.400 - 9889.978: 41.6457% ( 278) 00:08:00.690 9889.978 - 9949.556: 43.9777% ( 297) 00:08:00.690 9949.556 - 10009.135: 46.5845% ( 332) 00:08:00.690 10009.135 - 10068.713: 49.4111% ( 360) 00:08:00.690 10068.713 - 10128.291: 52.3712% ( 377) 00:08:00.690 10128.291 - 10187.869: 55.5983% ( 411) 00:08:00.690 10187.869 - 10247.447: 59.0923% ( 445) 00:08:00.690 10247.447 - 10307.025: 62.8141% ( 474) 00:08:00.690 10307.025 - 10366.604: 66.1511% ( 425) 00:08:00.690 10366.604 - 10426.182: 69.3781% ( 411) 00:08:00.690 10426.182 - 10485.760: 72.2519% ( 366) 00:08:00.690 10485.760 - 10545.338: 75.0000% ( 350) 00:08:00.690 10545.338 - 10604.916: 77.5754% ( 328) 00:08:00.690 10604.916 - 10664.495: 79.8681% ( 292) 00:08:00.690 10664.495 - 10724.073: 81.9959% ( 271) 00:08:00.690 10724.073 - 10783.651: 83.9981% ( 255) 00:08:00.690 10783.651 - 10843.229: 85.6627% ( 212) 00:08:00.690 10843.229 - 10902.807: 87.1074% ( 184) 00:08:00.690 10902.807 - 10962.385: 88.3244% ( 155) 00:08:00.690 10962.385 - 11021.964: 89.4001% ( 137) 00:08:00.690 11021.964 - 11081.542: 90.4523% ( 134) 00:08:00.690 11081.542 - 11141.120: 91.2531% ( 102) 00:08:00.690 11141.120 - 11200.698: 92.0854% ( 106) 00:08:00.690 11200.698 - 11260.276: 92.8470% ( 97) 00:08:00.690 11260.276 - 11319.855: 93.4281% ( 74) 00:08:00.690 11319.855 - 11379.433: 93.9149% ( 62) 00:08:00.690 11379.433 - 11439.011: 94.3546% ( 56) 00:08:00.690 11439.011 - 11498.589: 94.6765% ( 41) 00:08:00.690 11498.589 - 11558.167: 95.0769% ( 51) 00:08:00.690 11558.167 - 11617.745: 95.3125% ( 30) 00:08:00.690 11617.745 - 11677.324: 95.5323% ( 28) 00:08:00.690 11677.324 - 11736.902: 95.8150% ( 36) 00:08:00.690 11736.902 - 11796.480: 96.0270% ( 27) 00:08:00.690 11796.480 - 11856.058: 96.2390% ( 27) 00:08:00.690 11856.058 - 11915.636: 96.6159% ( 48) 00:08:00.690 11915.636 - 11975.215: 96.8200% ( 26) 00:08:00.690 11975.215 - 12034.793: 97.0399% ( 28) 00:08:00.690 12034.793 - 12094.371: 97.2833% ( 31) 
00:08:00.690 12094.371 - 12153.949: 97.4560% ( 22) 00:08:00.690 12153.949 - 12213.527: 97.6131% ( 20) 00:08:00.690 12213.527 - 12273.105: 97.7937% ( 23) 00:08:00.690 12273.105 - 12332.684: 98.0371% ( 31) 00:08:00.690 12332.684 - 12392.262: 98.2177% ( 23) 00:08:00.690 12392.262 - 12451.840: 98.3511% ( 17) 00:08:00.690 12451.840 - 12511.418: 98.4532% ( 13) 00:08:00.690 12511.418 - 12570.996: 98.6024% ( 19) 00:08:00.690 12570.996 - 12630.575: 98.7123% ( 14) 00:08:00.690 12630.575 - 12690.153: 98.7830% ( 9) 00:08:00.690 12690.153 - 12749.731: 98.8536% ( 9) 00:08:00.690 12749.731 - 12809.309: 98.8929% ( 5) 00:08:00.690 12809.309 - 12868.887: 98.9243% ( 4) 00:08:00.690 12868.887 - 12928.465: 98.9479% ( 3) 00:08:00.690 12928.465 - 12988.044: 98.9714% ( 3) 00:08:00.690 12988.044 - 13047.622: 98.9871% ( 2) 00:08:00.690 13047.622 - 13107.200: 98.9950% ( 1) 00:08:00.690 22997.178 - 23116.335: 99.0107% ( 2) 00:08:00.690 23116.335 - 23235.491: 99.0421% ( 4) 00:08:00.690 23235.491 - 23354.647: 99.0735% ( 4) 00:08:00.690 23354.647 - 23473.804: 99.1128% ( 5) 00:08:00.690 23473.804 - 23592.960: 99.1442% ( 4) 00:08:00.690 23592.960 - 23712.116: 99.1834% ( 5) 00:08:00.690 23712.116 - 23831.273: 99.2227% ( 5) 00:08:00.690 23831.273 - 23950.429: 99.2619% ( 5) 00:08:00.690 23950.429 - 24069.585: 99.3012% ( 5) 00:08:00.690 24069.585 - 24188.742: 99.3326% ( 4) 00:08:00.690 24188.742 - 24307.898: 99.3640% ( 4) 00:08:00.690 24307.898 - 24427.055: 99.4033% ( 5) 00:08:00.690 24427.055 - 24546.211: 99.4425% ( 5) 00:08:00.690 24546.211 - 24665.367: 99.4818% ( 5) 00:08:00.690 24665.367 - 24784.524: 99.4975% ( 2) 00:08:00.690 29431.622 - 29550.778: 99.5289% ( 4) 00:08:00.690 29550.778 - 29669.935: 99.5603% ( 4) 00:08:00.690 29669.935 - 29789.091: 99.5996% ( 5) 00:08:00.690 29789.091 - 29908.247: 99.6231% ( 3) 00:08:00.690 29908.247 - 30027.404: 99.6624% ( 5) 00:08:00.690 30027.404 - 30146.560: 99.6938% ( 4) 00:08:00.690 30146.560 - 30265.716: 99.7252% ( 4) 00:08:00.690 30265.716 - 30384.873: 99.7566% ( 4) 00:08:00.690 30384.873 - 30504.029: 99.7880% ( 4) 00:08:00.690 30504.029 - 30742.342: 99.8587% ( 9) 00:08:00.690 30742.342 - 30980.655: 99.9293% ( 9) 00:08:00.690 30980.655 - 31218.967: 99.9843% ( 7) 00:08:00.690 31218.967 - 31457.280: 100.0000% ( 2) 00:08:00.690 00:08:00.690 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:00.690 ============================================================================== 00:08:00.690 Range in us Cumulative IO count 00:08:00.690 7923.898 - 7983.476: 0.0079% ( 1) 00:08:00.690 7983.476 - 8043.055: 0.0550% ( 6) 00:08:00.690 8043.055 - 8102.633: 0.1649% ( 14) 00:08:00.690 8102.633 - 8162.211: 0.3612% ( 25) 00:08:00.690 8162.211 - 8221.789: 0.7930% ( 55) 00:08:00.690 8221.789 - 8281.367: 1.4055% ( 78) 00:08:00.690 8281.367 - 8340.945: 2.5361% ( 144) 00:08:00.690 8340.945 - 8400.524: 3.9808% ( 184) 00:08:00.690 8400.524 - 8460.102: 5.8024% ( 232) 00:08:00.690 8460.102 - 8519.680: 7.8832% ( 265) 00:08:00.690 8519.680 - 8579.258: 10.4978% ( 333) 00:08:00.690 8579.258 - 8638.836: 12.7984% ( 293) 00:08:00.690 8638.836 - 8698.415: 15.2717% ( 315) 00:08:00.690 8698.415 - 8757.993: 17.6900% ( 308) 00:08:00.690 8757.993 - 8817.571: 19.9906% ( 293) 00:08:00.690 8817.571 - 8877.149: 21.8986% ( 243) 00:08:00.690 8877.149 - 8936.727: 23.9636% ( 263) 00:08:00.690 8936.727 - 8996.305: 25.8087% ( 235) 00:08:00.690 8996.305 - 9055.884: 27.5047% ( 216) 00:08:00.690 9055.884 - 9115.462: 28.9494% ( 184) 00:08:00.690 9115.462 - 9175.040: 30.2999% ( 172) 00:08:00.690 9175.040 - 9234.618: 
31.1950% ( 114) 00:08:00.690 9234.618 - 9294.196: 31.7918% ( 76) 00:08:00.690 9294.196 - 9353.775: 32.3178% ( 67) 00:08:00.690 9353.775 - 9413.353: 32.7261% ( 52) 00:08:00.690 9413.353 - 9472.931: 33.2365% ( 65) 00:08:00.690 9472.931 - 9532.509: 33.6919% ( 58) 00:08:00.690 9532.509 - 9592.087: 34.2180% ( 67) 00:08:00.690 9592.087 - 9651.665: 34.8932% ( 86) 00:08:00.690 9651.665 - 9711.244: 35.8590% ( 123) 00:08:00.690 9711.244 - 9770.822: 37.0839% ( 156) 00:08:00.690 9770.822 - 9830.400: 38.7170% ( 208) 00:08:00.690 9830.400 - 9889.978: 40.7663% ( 261) 00:08:00.690 9889.978 - 9949.556: 43.5459% ( 354) 00:08:00.690 9949.556 - 10009.135: 46.6630% ( 397) 00:08:00.690 10009.135 - 10068.713: 50.1570% ( 445) 00:08:00.690 10068.713 - 10128.291: 53.3684% ( 409) 00:08:00.690 10128.291 - 10187.869: 56.6897% ( 423) 00:08:00.690 10187.869 - 10247.447: 59.6969% ( 383) 00:08:00.690 10247.447 - 10307.025: 62.8141% ( 397) 00:08:00.690 10307.025 - 10366.604: 65.8763% ( 390) 00:08:00.690 10366.604 - 10426.182: 68.8756% ( 382) 00:08:00.690 10426.182 - 10485.760: 71.7572% ( 367) 00:08:00.690 10485.760 - 10545.338: 74.4739% ( 346) 00:08:00.690 10545.338 - 10604.916: 77.0415% ( 327) 00:08:00.690 10604.916 - 10664.495: 79.4991% ( 313) 00:08:00.690 10664.495 - 10724.073: 81.7211% ( 283) 00:08:00.690 10724.073 - 10783.651: 83.7547% ( 259) 00:08:00.690 10783.651 - 10843.229: 85.4271% ( 213) 00:08:00.690 10843.229 - 10902.807: 86.8640% ( 183) 00:08:00.690 10902.807 - 10962.385: 88.2773% ( 180) 00:08:00.690 10962.385 - 11021.964: 89.4080% ( 144) 00:08:00.690 11021.964 - 11081.542: 90.3188% ( 116) 00:08:00.690 11081.542 - 11141.120: 91.1118% ( 101) 00:08:00.691 11141.120 - 11200.698: 91.7557% ( 82) 00:08:00.691 11200.698 - 11260.276: 92.2974% ( 69) 00:08:00.691 11260.276 - 11319.855: 92.7764% ( 61) 00:08:00.691 11319.855 - 11379.433: 93.1768% ( 51) 00:08:00.691 11379.433 - 11439.011: 93.6165% ( 56) 00:08:00.691 11439.011 - 11498.589: 94.1190% ( 64) 00:08:00.691 11498.589 - 11558.167: 94.6451% ( 67) 00:08:00.691 11558.167 - 11617.745: 95.0455% ( 51) 00:08:00.691 11617.745 - 11677.324: 95.5402% ( 63) 00:08:00.691 11677.324 - 11736.902: 96.0741% ( 68) 00:08:00.691 11736.902 - 11796.480: 96.4824% ( 52) 00:08:00.691 11796.480 - 11856.058: 96.7729% ( 37) 00:08:00.691 11856.058 - 11915.636: 97.0163% ( 31) 00:08:00.691 11915.636 - 11975.215: 97.3697% ( 45) 00:08:00.691 11975.215 - 12034.793: 97.6445% ( 35) 00:08:00.691 12034.793 - 12094.371: 97.8643% ( 28) 00:08:00.691 12094.371 - 12153.949: 98.0292% ( 21) 00:08:00.691 12153.949 - 12213.527: 98.1784% ( 19) 00:08:00.691 12213.527 - 12273.105: 98.3590% ( 23) 00:08:00.691 12273.105 - 12332.684: 98.4768% ( 15) 00:08:00.691 12332.684 - 12392.262: 98.5553% ( 10) 00:08:00.691 12392.262 - 12451.840: 98.6259% ( 9) 00:08:00.691 12451.840 - 12511.418: 98.6966% ( 9) 00:08:00.691 12511.418 - 12570.996: 98.7594% ( 8) 00:08:00.691 12570.996 - 12630.575: 98.8222% ( 8) 00:08:00.691 12630.575 - 12690.153: 98.8693% ( 6) 00:08:00.691 12690.153 - 12749.731: 98.9243% ( 7) 00:08:00.691 12749.731 - 12809.309: 98.9479% ( 3) 00:08:00.691 12809.309 - 12868.887: 98.9636% ( 2) 00:08:00.691 12868.887 - 12928.465: 98.9871% ( 3) 00:08:00.691 12928.465 - 12988.044: 98.9950% ( 1) 00:08:00.691 21328.989 - 21448.145: 99.0185% ( 3) 00:08:00.691 21448.145 - 21567.302: 99.0578% ( 5) 00:08:00.691 21567.302 - 21686.458: 99.0892% ( 4) 00:08:00.691 21686.458 - 21805.615: 99.1285% ( 5) 00:08:00.691 21805.615 - 21924.771: 99.1677% ( 5) 00:08:00.691 21924.771 - 22043.927: 99.1991% ( 4) 00:08:00.691 22043.927 - 22163.084: 
99.2305% ( 4) 00:08:00.691 22163.084 - 22282.240: 99.2619% ( 4) 00:08:00.691 22282.240 - 22401.396: 99.2933% ( 4) 00:08:00.691 22401.396 - 22520.553: 99.3247% ( 4) 00:08:00.691 22520.553 - 22639.709: 99.3562% ( 4) 00:08:00.691 22639.709 - 22758.865: 99.3797% ( 3) 00:08:00.691 22758.865 - 22878.022: 99.4111% ( 4) 00:08:00.691 22878.022 - 22997.178: 99.4425% ( 4) 00:08:00.691 22997.178 - 23116.335: 99.4739% ( 4) 00:08:00.691 23116.335 - 23235.491: 99.4975% ( 3) 00:08:00.691 27882.589 - 28001.745: 99.5053% ( 1) 00:08:00.691 28001.745 - 28120.902: 99.5132% ( 1) 00:08:00.691 28120.902 - 28240.058: 99.5446% ( 4) 00:08:00.691 28240.058 - 28359.215: 99.5682% ( 3) 00:08:00.691 28359.215 - 28478.371: 99.5996% ( 4) 00:08:00.691 28478.371 - 28597.527: 99.6231% ( 3) 00:08:00.691 28597.527 - 28716.684: 99.6467% ( 3) 00:08:00.691 28716.684 - 28835.840: 99.6781% ( 4) 00:08:00.691 28835.840 - 28954.996: 99.7016% ( 3) 00:08:00.691 28954.996 - 29074.153: 99.7252% ( 3) 00:08:00.691 29074.153 - 29193.309: 99.7487% ( 3) 00:08:00.691 29193.309 - 29312.465: 99.7802% ( 4) 00:08:00.691 29312.465 - 29431.622: 99.8037% ( 3) 00:08:00.691 29431.622 - 29550.778: 99.8351% ( 4) 00:08:00.691 29550.778 - 29669.935: 99.8587% ( 3) 00:08:00.691 29669.935 - 29789.091: 99.8901% ( 4) 00:08:00.691 29789.091 - 29908.247: 99.9136% ( 3) 00:08:00.691 29908.247 - 30027.404: 99.9372% ( 3) 00:08:00.691 30027.404 - 30146.560: 99.9607% ( 3) 00:08:00.691 30146.560 - 30265.716: 99.9921% ( 4) 00:08:00.691 30265.716 - 30384.873: 100.0000% ( 1) 00:08:00.691 00:08:00.691 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:00.691 ============================================================================== 00:08:00.691 Range in us Cumulative IO count 00:08:00.691 7864.320 - 7923.898: 0.0079% ( 1) 00:08:00.691 7923.898 - 7983.476: 0.0157% ( 1) 00:08:00.691 7983.476 - 8043.055: 0.0864% ( 9) 00:08:00.691 8043.055 - 8102.633: 0.2434% ( 20) 00:08:00.691 8102.633 - 8162.211: 0.5653% ( 41) 00:08:00.691 8162.211 - 8221.789: 1.0050% ( 56) 00:08:00.691 8221.789 - 8281.367: 1.5939% ( 75) 00:08:00.691 8281.367 - 8340.945: 2.5832% ( 126) 00:08:00.691 8340.945 - 8400.524: 4.0280% ( 184) 00:08:00.691 8400.524 - 8460.102: 5.7632% ( 221) 00:08:00.691 8460.102 - 8519.680: 7.7183% ( 249) 00:08:00.691 8519.680 - 8579.258: 10.2230% ( 319) 00:08:00.691 8579.258 - 8638.836: 12.5550% ( 297) 00:08:00.691 8638.836 - 8698.415: 14.9576% ( 306) 00:08:00.691 8698.415 - 8757.993: 17.5094% ( 325) 00:08:00.691 8757.993 - 8817.571: 20.0769% ( 327) 00:08:00.691 8817.571 - 8877.149: 22.3147% ( 285) 00:08:00.691 8877.149 - 8936.727: 24.3640% ( 261) 00:08:00.691 8936.727 - 8996.305: 26.2249% ( 237) 00:08:00.691 8996.305 - 9055.884: 27.7246% ( 191) 00:08:00.691 9055.884 - 9115.462: 29.1300% ( 179) 00:08:00.691 9115.462 - 9175.040: 30.1822% ( 134) 00:08:00.691 9175.040 - 9234.618: 31.0223% ( 107) 00:08:00.691 9234.618 - 9294.196: 31.6504% ( 80) 00:08:00.691 9294.196 - 9353.775: 32.2629% ( 78) 00:08:00.691 9353.775 - 9413.353: 32.7654% ( 64) 00:08:00.691 9413.353 - 9472.931: 33.1737% ( 52) 00:08:00.691 9472.931 - 9532.509: 33.6369% ( 59) 00:08:00.691 9532.509 - 9592.087: 34.2651% ( 80) 00:08:00.691 9592.087 - 9651.665: 35.1209% ( 109) 00:08:00.691 9651.665 - 9711.244: 36.0788% ( 122) 00:08:00.691 9711.244 - 9770.822: 37.4529% ( 175) 00:08:00.691 9770.822 - 9830.400: 39.2588% ( 230) 00:08:00.691 9830.400 - 9889.978: 41.3866% ( 271) 00:08:00.691 9889.978 - 9949.556: 43.7971% ( 307) 00:08:00.691 9949.556 - 10009.135: 46.5923% ( 356) 00:08:00.691 10009.135 - 10068.713: 
49.7487% ( 402) 00:08:00.691 10068.713 - 10128.291: 52.9837% ( 412) 00:08:00.691 10128.291 - 10187.869: 56.2107% ( 411) 00:08:00.691 10187.869 - 10247.447: 59.3436% ( 399) 00:08:00.691 10247.447 - 10307.025: 62.4058% ( 390) 00:08:00.691 10307.025 - 10366.604: 65.6329% ( 411) 00:08:00.691 10366.604 - 10426.182: 68.7421% ( 396) 00:08:00.691 10426.182 - 10485.760: 71.7729% ( 386) 00:08:00.691 10485.760 - 10545.338: 74.5839% ( 358) 00:08:00.691 10545.338 - 10604.916: 77.2692% ( 342) 00:08:00.691 10604.916 - 10664.495: 79.4834% ( 282) 00:08:00.691 10664.495 - 10724.073: 81.6583% ( 277) 00:08:00.691 10724.073 - 10783.651: 83.6369% ( 252) 00:08:00.691 10783.651 - 10843.229: 85.3015% ( 212) 00:08:00.691 10843.229 - 10902.807: 86.7698% ( 187) 00:08:00.691 10902.807 - 10962.385: 87.9633% ( 152) 00:08:00.691 10962.385 - 11021.964: 89.0154% ( 134) 00:08:00.691 11021.964 - 11081.542: 90.0361% ( 130) 00:08:00.691 11081.542 - 11141.120: 90.9155% ( 112) 00:08:00.691 11141.120 - 11200.698: 91.6614% ( 95) 00:08:00.691 11200.698 - 11260.276: 92.2111% ( 70) 00:08:00.691 11260.276 - 11319.855: 92.8470% ( 81) 00:08:00.691 11319.855 - 11379.433: 93.3496% ( 64) 00:08:00.691 11379.433 - 11439.011: 93.8128% ( 59) 00:08:00.691 11439.011 - 11498.589: 94.2761% ( 59) 00:08:00.691 11498.589 - 11558.167: 94.7786% ( 64) 00:08:00.691 11558.167 - 11617.745: 95.2340% ( 58) 00:08:00.691 11617.745 - 11677.324: 95.6109% ( 48) 00:08:00.691 11677.324 - 11736.902: 96.1134% ( 64) 00:08:00.691 11736.902 - 11796.480: 96.3960% ( 36) 00:08:00.691 11796.480 - 11856.058: 96.7180% ( 41) 00:08:00.691 11856.058 - 11915.636: 97.0320% ( 40) 00:08:00.691 11915.636 - 11975.215: 97.3383% ( 39) 00:08:00.691 11975.215 - 12034.793: 97.6052% ( 34) 00:08:00.691 12034.793 - 12094.371: 97.7937% ( 24) 00:08:00.691 12094.371 - 12153.949: 97.9978% ( 26) 00:08:00.691 12153.949 - 12213.527: 98.1391% ( 18) 00:08:00.691 12213.527 - 12273.105: 98.2491% ( 14) 00:08:00.691 12273.105 - 12332.684: 98.3197% ( 9) 00:08:00.691 12332.684 - 12392.262: 98.4139% ( 12) 00:08:00.691 12392.262 - 12451.840: 98.5082% ( 12) 00:08:00.691 12451.840 - 12511.418: 98.6024% ( 12) 00:08:00.691 12511.418 - 12570.996: 98.6731% ( 9) 00:08:00.691 12570.996 - 12630.575: 98.7516% ( 10) 00:08:00.691 12630.575 - 12690.153: 98.8222% ( 9) 00:08:00.691 12690.153 - 12749.731: 98.8693% ( 6) 00:08:00.691 12749.731 - 12809.309: 98.9086% ( 5) 00:08:00.691 12809.309 - 12868.887: 98.9243% ( 2) 00:08:00.691 12868.887 - 12928.465: 98.9400% ( 2) 00:08:00.691 12928.465 - 12988.044: 98.9479% ( 1) 00:08:00.691 12988.044 - 13047.622: 98.9636% ( 2) 00:08:00.691 13047.622 - 13107.200: 98.9714% ( 1) 00:08:00.691 13107.200 - 13166.778: 98.9871% ( 2) 00:08:00.691 13166.778 - 13226.356: 98.9950% ( 1) 00:08:00.691 20018.269 - 20137.425: 99.0421% ( 6) 00:08:00.691 20137.425 - 20256.582: 99.1285% ( 11) 00:08:00.691 20256.582 - 20375.738: 99.1520% ( 3) 00:08:00.691 20375.738 - 20494.895: 99.1756% ( 3) 00:08:00.691 20494.895 - 20614.051: 99.1913% ( 2) 00:08:00.691 20614.051 - 20733.207: 99.2227% ( 4) 00:08:00.691 20733.207 - 20852.364: 99.2541% ( 4) 00:08:00.691 20852.364 - 20971.520: 99.2855% ( 4) 00:08:00.691 20971.520 - 21090.676: 99.3169% ( 4) 00:08:00.691 21090.676 - 21209.833: 99.3483% ( 4) 00:08:00.692 21209.833 - 21328.989: 99.3876% ( 5) 00:08:00.692 21328.989 - 21448.145: 99.4190% ( 4) 00:08:00.692 21448.145 - 21567.302: 99.4582% ( 5) 00:08:00.692 21567.302 - 21686.458: 99.4896% ( 4) 00:08:00.692 21686.458 - 21805.615: 99.4975% ( 1) 00:08:00.692 26810.182 - 26929.338: 99.5132% ( 2) 00:08:00.692 26929.338 - 
27048.495: 99.5289% ( 2) 00:08:00.692 27048.495 - 27167.651: 99.5603% ( 4) 00:08:00.692 27167.651 - 27286.807: 99.5917% ( 4) 00:08:00.692 27286.807 - 27405.964: 99.6153% ( 3) 00:08:00.692 27405.964 - 27525.120: 99.6388% ( 3) 00:08:00.692 27525.120 - 27644.276: 99.6702% ( 4) 00:08:00.692 27644.276 - 27763.433: 99.6938% ( 3) 00:08:00.692 27763.433 - 27882.589: 99.7252% ( 4) 00:08:00.692 27882.589 - 28001.745: 99.7487% ( 3) 00:08:00.692 28001.745 - 28120.902: 99.7723% ( 3) 00:08:00.692 28120.902 - 28240.058: 99.8037% ( 4) 00:08:00.692 28240.058 - 28359.215: 99.8351% ( 4) 00:08:00.692 28359.215 - 28478.371: 99.8744% ( 5) 00:08:00.692 28478.371 - 28597.527: 99.9058% ( 4) 00:08:00.692 28597.527 - 28716.684: 99.9372% ( 4) 00:08:00.692 28716.684 - 28835.840: 99.9764% ( 5) 00:08:00.692 28835.840 - 28954.996: 100.0000% ( 3) 00:08:00.692 00:08:00.692 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:00.692 ============================================================================== 00:08:00.692 Range in us Cumulative IO count 00:08:00.692 7864.320 - 7923.898: 0.0079% ( 1) 00:08:00.692 7983.476 - 8043.055: 0.0157% ( 1) 00:08:00.692 8043.055 - 8102.633: 0.1335% ( 15) 00:08:00.692 8102.633 - 8162.211: 0.3690% ( 30) 00:08:00.692 8162.211 - 8221.789: 0.6988% ( 42) 00:08:00.692 8221.789 - 8281.367: 1.3662% ( 85) 00:08:00.692 8281.367 - 8340.945: 2.5047% ( 145) 00:08:00.692 8340.945 - 8400.524: 4.1065% ( 204) 00:08:00.692 8400.524 - 8460.102: 5.9516% ( 235) 00:08:00.692 8460.102 - 8519.680: 7.7104% ( 224) 00:08:00.692 8519.680 - 8579.258: 9.6891% ( 252) 00:08:00.692 8579.258 - 8638.836: 12.0682% ( 303) 00:08:00.692 8638.836 - 8698.415: 14.4708% ( 306) 00:08:00.692 8698.415 - 8757.993: 16.9598% ( 317) 00:08:00.692 8757.993 - 8817.571: 19.2604% ( 293) 00:08:00.692 8817.571 - 8877.149: 21.6552% ( 305) 00:08:00.692 8877.149 - 8936.727: 23.9086% ( 287) 00:08:00.692 8936.727 - 8996.305: 25.9579% ( 261) 00:08:00.692 8996.305 - 9055.884: 27.6774% ( 219) 00:08:00.692 9055.884 - 9115.462: 29.0751% ( 178) 00:08:00.692 9115.462 - 9175.040: 30.3156% ( 158) 00:08:00.692 9175.040 - 9234.618: 31.2657% ( 121) 00:08:00.692 9234.618 - 9294.196: 31.8781% ( 78) 00:08:00.692 9294.196 - 9353.775: 32.3728% ( 63) 00:08:00.692 9353.775 - 9413.353: 32.8282% ( 58) 00:08:00.692 9413.353 - 9472.931: 33.2993% ( 60) 00:08:00.692 9472.931 - 9532.509: 33.9589% ( 84) 00:08:00.692 9532.509 - 9592.087: 34.6106% ( 83) 00:08:00.692 9592.087 - 9651.665: 35.2387% ( 80) 00:08:00.692 9651.665 - 9711.244: 36.2830% ( 133) 00:08:00.692 9711.244 - 9770.822: 37.7434% ( 186) 00:08:00.692 9770.822 - 9830.400: 39.2117% ( 187) 00:08:00.692 9830.400 - 9889.978: 41.1746% ( 250) 00:08:00.692 9889.978 - 9949.556: 43.7736% ( 331) 00:08:00.692 9949.556 - 10009.135: 46.5609% ( 355) 00:08:00.692 10009.135 - 10068.713: 49.7409% ( 405) 00:08:00.692 10068.713 - 10128.291: 53.1014% ( 428) 00:08:00.692 10128.291 - 10187.869: 56.4541% ( 427) 00:08:00.692 10187.869 - 10247.447: 59.9011% ( 439) 00:08:00.692 10247.447 - 10307.025: 63.0418% ( 400) 00:08:00.692 10307.025 - 10366.604: 66.1432% ( 395) 00:08:00.692 10366.604 - 10426.182: 69.1583% ( 384) 00:08:00.692 10426.182 - 10485.760: 72.2519% ( 394) 00:08:00.692 10485.760 - 10545.338: 75.0707% ( 359) 00:08:00.692 10545.338 - 10604.916: 77.4969% ( 309) 00:08:00.692 10604.916 - 10664.495: 79.8445% ( 299) 00:08:00.692 10664.495 - 10724.073: 82.0666% ( 283) 00:08:00.692 10724.073 - 10783.651: 83.9981% ( 246) 00:08:00.692 10783.651 - 10843.229: 85.6784% ( 214) 00:08:00.692 10843.229 - 10902.807: 87.1938% ( 
193) 00:08:00.692 10902.807 - 10962.385: 88.4501% ( 160) 00:08:00.692 10962.385 - 11021.964: 89.4629% ( 129) 00:08:00.692 11021.964 - 11081.542: 90.2481% ( 100) 00:08:00.692 11081.542 - 11141.120: 90.9391% ( 88) 00:08:00.692 11141.120 - 11200.698: 91.5044% ( 72) 00:08:00.692 11200.698 - 11260.276: 92.0776% ( 73) 00:08:00.692 11260.276 - 11319.855: 92.5958% ( 66) 00:08:00.692 11319.855 - 11379.433: 93.1140% ( 66) 00:08:00.692 11379.433 - 11439.011: 93.6322% ( 66) 00:08:00.692 11439.011 - 11498.589: 94.0484% ( 53) 00:08:00.692 11498.589 - 11558.167: 94.5273% ( 61) 00:08:00.692 11558.167 - 11617.745: 94.9670% ( 56) 00:08:00.692 11617.745 - 11677.324: 95.2889% ( 41) 00:08:00.692 11677.324 - 11736.902: 95.6109% ( 41) 00:08:00.692 11736.902 - 11796.480: 95.9956% ( 49) 00:08:00.692 11796.480 - 11856.058: 96.2312% ( 30) 00:08:00.692 11856.058 - 11915.636: 96.4824% ( 32) 00:08:00.692 11915.636 - 11975.215: 96.7886% ( 39) 00:08:00.692 11975.215 - 12034.793: 97.0556% ( 34) 00:08:00.692 12034.793 - 12094.371: 97.3540% ( 38) 00:08:00.692 12094.371 - 12153.949: 97.5817% ( 29) 00:08:00.692 12153.949 - 12213.527: 97.8094% ( 29) 00:08:00.692 12213.527 - 12273.105: 97.9664% ( 20) 00:08:00.692 12273.105 - 12332.684: 98.1391% ( 22) 00:08:00.692 12332.684 - 12392.262: 98.2726% ( 17) 00:08:00.692 12392.262 - 12451.840: 98.3747% ( 13) 00:08:00.692 12451.840 - 12511.418: 98.4689% ( 12) 00:08:00.692 12511.418 - 12570.996: 98.5474% ( 10) 00:08:00.692 12570.996 - 12630.575: 98.6181% ( 9) 00:08:00.692 12630.575 - 12690.153: 98.6966% ( 10) 00:08:00.692 12690.153 - 12749.731: 98.7751% ( 10) 00:08:00.692 12749.731 - 12809.309: 98.8222% ( 6) 00:08:00.692 12809.309 - 12868.887: 98.8379% ( 2) 00:08:00.692 12868.887 - 12928.465: 98.8536% ( 2) 00:08:00.692 12928.465 - 12988.044: 98.8615% ( 1) 00:08:00.692 12988.044 - 13047.622: 98.8772% ( 2) 00:08:00.692 13047.622 - 13107.200: 98.8929% ( 2) 00:08:00.692 13107.200 - 13166.778: 98.9008% ( 1) 00:08:00.692 13166.778 - 13226.356: 98.9165% ( 2) 00:08:00.692 13226.356 - 13285.935: 98.9322% ( 2) 00:08:00.692 13285.935 - 13345.513: 98.9557% ( 3) 00:08:00.692 13345.513 - 13405.091: 98.9714% ( 2) 00:08:00.692 13405.091 - 13464.669: 98.9871% ( 2) 00:08:00.692 13464.669 - 13524.247: 98.9950% ( 1) 00:08:00.692 18588.393 - 18707.549: 99.0264% ( 4) 00:08:00.692 18707.549 - 18826.705: 99.0892% ( 8) 00:08:00.692 18826.705 - 18945.862: 99.1599% ( 9) 00:08:00.692 18945.862 - 19065.018: 99.2227% ( 8) 00:08:00.692 19065.018 - 19184.175: 99.2541% ( 4) 00:08:00.692 19184.175 - 19303.331: 99.2855% ( 4) 00:08:00.692 19303.331 - 19422.487: 99.3090% ( 3) 00:08:00.692 19422.487 - 19541.644: 99.3326% ( 3) 00:08:00.692 19541.644 - 19660.800: 99.3640% ( 4) 00:08:00.692 19660.800 - 19779.956: 99.3876% ( 3) 00:08:00.692 19779.956 - 19899.113: 99.4190% ( 4) 00:08:00.692 19899.113 - 20018.269: 99.4582% ( 5) 00:08:00.692 20018.269 - 20137.425: 99.4818% ( 3) 00:08:00.692 20137.425 - 20256.582: 99.4975% ( 2) 00:08:00.692 24307.898 - 24427.055: 99.5132% ( 2) 00:08:00.692 25499.462 - 25618.618: 99.5289% ( 2) 00:08:00.692 25618.618 - 25737.775: 99.5603% ( 4) 00:08:00.692 25737.775 - 25856.931: 99.5917% ( 4) 00:08:00.692 25856.931 - 25976.087: 99.6310% ( 5) 00:08:00.692 25976.087 - 26095.244: 99.6545% ( 3) 00:08:00.692 26095.244 - 26214.400: 99.6938% ( 5) 00:08:00.692 26214.400 - 26333.556: 99.7252% ( 4) 00:08:00.692 26333.556 - 26452.713: 99.7566% ( 4) 00:08:00.692 26452.713 - 26571.869: 99.7959% ( 5) 00:08:00.692 26571.869 - 26691.025: 99.8273% ( 4) 00:08:00.692 26691.025 - 26810.182: 99.8665% ( 5) 00:08:00.692 
26810.182 - 26929.338: 99.8979% ( 4) 00:08:00.692 26929.338 - 27048.495: 99.9372% ( 5) 00:08:00.692 27048.495 - 27167.651: 99.9686% ( 4) 00:08:00.692 27167.651 - 27286.807: 100.0000% ( 4) 00:08:00.692 00:08:00.692 08:08:05 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:00.692 00:08:00.692 real 0m2.731s 00:08:00.692 user 0m2.298s 00:08:00.692 sys 0m0.318s 00:08:00.692 08:08:05 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.692 08:08:05 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:00.692 ************************************ 00:08:00.692 END TEST nvme_perf 00:08:00.692 ************************************ 00:08:00.692 08:08:05 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:00.692 08:08:05 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:00.692 08:08:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.692 08:08:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.692 ************************************ 00:08:00.692 START TEST nvme_hello_world 00:08:00.692 ************************************ 00:08:00.692 08:08:05 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:00.952 Initializing NVMe Controllers 00:08:00.952 Attached to 0000:00:10.0 00:08:00.952 Namespace ID: 1 size: 6GB 00:08:00.952 Attached to 0000:00:11.0 00:08:00.952 Namespace ID: 1 size: 5GB 00:08:00.952 Attached to 0000:00:13.0 00:08:00.952 Namespace ID: 1 size: 1GB 00:08:00.952 Attached to 0000:00:12.0 00:08:00.952 Namespace ID: 1 size: 4GB 00:08:00.952 Namespace ID: 2 size: 4GB 00:08:00.952 Namespace ID: 3 size: 4GB 00:08:00.952 Initialization complete. 00:08:00.952 INFO: using host memory buffer for IO 00:08:00.952 Hello world! 00:08:00.952 INFO: using host memory buffer for IO 00:08:00.952 Hello world! 00:08:00.952 INFO: using host memory buffer for IO 00:08:00.952 Hello world! 00:08:00.952 INFO: using host memory buffer for IO 00:08:00.952 Hello world! 00:08:00.952 INFO: using host memory buffer for IO 00:08:00.952 Hello world! 00:08:00.952 INFO: using host memory buffer for IO 00:08:00.952 Hello world! 
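The six "Hello world!" lines above are the expected output of the hello_world example: one write/read round-trip per attached namespace (NSID 1 on each of the four controllers, plus NSIDs 2 and 3 on 0000:00:12.0), each using a host memory buffer for the I/O. The START/END banners and real/user/sys triplets that frame every test in this log come from the harness's run_test helper; the sketch below is a minimal bash reconstruction of that observable behavior, inferred from the banners and bash `time` output in this log rather than taken from the actual helper in common/autotest_common.sh:

```bash
# Minimal sketch of the run_test wrapper as observed in this log: banner,
# timed command, banner. The real SPDK helper also handles xtrace toggling
# (the xtrace_disable / set +x lines above) and exit-status propagation.
run_test() {
    local test_name=$1
    shift
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    time "$@"    # e.g. /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
}
```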
00:08:00.952 ************************************ 00:08:00.952 END TEST nvme_hello_world 00:08:00.952 ************************************ 00:08:00.952 00:08:00.952 real 0m0.277s 00:08:00.952 user 0m0.115s 00:08:00.952 sys 0m0.121s 00:08:00.952 08:08:05 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.952 08:08:05 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:00.952 08:08:05 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:00.952 08:08:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.952 08:08:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.952 08:08:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.952 ************************************ 00:08:00.952 START TEST nvme_sgl 00:08:00.952 ************************************ 00:08:00.952 08:08:05 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:01.211 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:01.211 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:01.211 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:01.211 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:01.211 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:01.211 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:01.211 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:01.211 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:01.211 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:01.471 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:01.471 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:01.471 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:01.471 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:01.471 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:01.471 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:01.471 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:01.471 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:01.471 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:01.471 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:01.471 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:01.471 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:01.471 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:01.471 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:01.471 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:01.471 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:01.471 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:01.471 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:01.471 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:01.471 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:01.471 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:01.471 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:01.471 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:01.471 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:01.471 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:01.471 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:01.471 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:01.471 NVMe Readv/Writev Request test 00:08:01.471 Attached to 0000:00:10.0 00:08:01.471 Attached to 0000:00:11.0 00:08:01.471 Attached to 0000:00:13.0 00:08:01.471 Attached to 0000:00:12.0 00:08:01.471 0000:00:10.0: build_io_request_2 test passed 00:08:01.471 0000:00:10.0: build_io_request_4 test passed 00:08:01.471 0000:00:10.0: build_io_request_5 test passed 00:08:01.471 0000:00:10.0: build_io_request_6 test passed 00:08:01.471 0000:00:10.0: build_io_request_7 test passed 00:08:01.471 0000:00:10.0: build_io_request_10 test passed 00:08:01.471 0000:00:11.0: build_io_request_2 test passed 00:08:01.471 0000:00:11.0: build_io_request_4 test passed 00:08:01.471 0000:00:11.0: build_io_request_5 test passed 00:08:01.471 0000:00:11.0: build_io_request_6 test passed 00:08:01.471 0000:00:11.0: build_io_request_7 test passed 00:08:01.471 0000:00:11.0: build_io_request_10 test passed 00:08:01.471 Cleaning up... 00:08:01.471 00:08:01.471 real 0m0.364s 00:08:01.471 user 0m0.201s 00:08:01.471 sys 0m0.121s 00:08:01.471 08:08:06 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.471 08:08:06 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:01.471 ************************************ 00:08:01.471 END TEST nvme_sgl 00:08:01.471 ************************************ 00:08:01.471 08:08:06 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:01.471 08:08:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.471 08:08:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.471 08:08:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.471 ************************************ 00:08:01.471 START TEST nvme_e2edp 00:08:01.471 ************************************ 00:08:01.471 08:08:06 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:01.731 NVMe Write/Read with End-to-End data protection test 00:08:01.731 Attached to 0000:00:10.0 00:08:01.731 Attached to 0000:00:11.0 00:08:01.731 Attached to 0000:00:13.0 00:08:01.731 Attached to 0000:00:12.0 00:08:01.731 Cleaning up... 
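The nvme_e2edp step above attaches to the same four controllers and exercises NVMe write/read with end-to-end data protection; the quick "Cleaning up..." with no failure lines is what a pass looks like here. A hedged way to rerun just this step outside the Jenkins harness, assuming the vagrant paths shown in this log and that the devices are already bound to an SPDK-compatible driver:

```bash
# Standalone rerun of the end-to-end data protection test; the binary path is
# taken verbatim from the run_test line above. Root (or equivalent device
# access) is assumed, as in the CI environment.
sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
```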
00:08:01.731 00:08:01.731 real 0m0.311s 00:08:01.731 user 0m0.123s 00:08:01.731 sys 0m0.146s 00:08:01.731 08:08:06 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.731 08:08:06 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:01.731 ************************************ 00:08:01.731 END TEST nvme_e2edp 00:08:01.731 ************************************ 00:08:01.731 08:08:06 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:01.731 08:08:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.731 08:08:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.731 08:08:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.731 ************************************ 00:08:01.731 START TEST nvme_reserve 00:08:01.731 ************************************ 00:08:01.731 08:08:06 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:01.990 ===================================================== 00:08:01.990 NVMe Controller at PCI bus 0, device 16, function 0 00:08:01.990 ===================================================== 00:08:01.990 Reservations: Not Supported 00:08:01.990 ===================================================== 00:08:01.990 NVMe Controller at PCI bus 0, device 17, function 0 00:08:01.990 ===================================================== 00:08:01.990 Reservations: Not Supported 00:08:01.990 ===================================================== 00:08:01.990 NVMe Controller at PCI bus 0, device 19, function 0 00:08:01.990 ===================================================== 00:08:01.990 Reservations: Not Supported 00:08:01.990 ===================================================== 00:08:01.990 NVMe Controller at PCI bus 0, device 18, function 0 00:08:01.990 ===================================================== 00:08:01.990 Reservations: Not Supported 00:08:01.990 Reservation test passed 00:08:01.990 00:08:01.990 real 0m0.293s 00:08:01.990 user 0m0.112s 00:08:01.990 sys 0m0.144s 00:08:01.990 ************************************ 00:08:01.990 END TEST nvme_reserve 00:08:01.990 ************************************ 00:08:01.990 08:08:06 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.990 08:08:06 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:02.249 08:08:07 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:02.249 08:08:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.249 08:08:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.249 08:08:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.249 ************************************ 00:08:02.249 START TEST nvme_err_injection 00:08:02.249 ************************************ 00:08:02.249 08:08:07 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:02.508 NVMe Error Injection test 00:08:02.508 Attached to 0000:00:10.0 00:08:02.508 Attached to 0000:00:11.0 00:08:02.508 Attached to 0000:00:13.0 00:08:02.508 Attached to 0000:00:12.0 00:08:02.508 0000:00:13.0: get features failed as expected 00:08:02.508 0000:00:12.0: get features failed as expected 00:08:02.508 0000:00:10.0: get features failed as expected 00:08:02.508 0000:00:11.0: get features failed as expected 00:08:02.508 
0000:00:10.0: get features successfully as expected 00:08:02.508 0000:00:11.0: get features successfully as expected 00:08:02.508 0000:00:13.0: get features successfully as expected 00:08:02.508 0000:00:12.0: get features successfully as expected 00:08:02.508 0000:00:10.0: read failed as expected 00:08:02.508 0000:00:11.0: read failed as expected 00:08:02.508 0000:00:13.0: read failed as expected 00:08:02.508 0000:00:12.0: read failed as expected 00:08:02.508 0000:00:10.0: read successfully as expected 00:08:02.508 0000:00:11.0: read successfully as expected 00:08:02.508 0000:00:13.0: read successfully as expected 00:08:02.508 0000:00:12.0: read successfully as expected 00:08:02.508 Cleaning up... 00:08:02.508 00:08:02.508 real 0m0.325s 00:08:02.508 user 0m0.144s 00:08:02.508 sys 0m0.137s 00:08:02.508 ************************************ 00:08:02.508 END TEST nvme_err_injection 00:08:02.508 ************************************ 00:08:02.508 08:08:07 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.508 08:08:07 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:02.508 08:08:07 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:02.508 08:08:07 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:02.508 08:08:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.508 08:08:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.508 ************************************ 00:08:02.508 START TEST nvme_overhead 00:08:02.508 ************************************ 00:08:02.508 08:08:07 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:03.888 Initializing NVMe Controllers 00:08:03.888 Attached to 0000:00:10.0 00:08:03.888 Attached to 0000:00:11.0 00:08:03.888 Attached to 0000:00:13.0 00:08:03.888 Attached to 0000:00:12.0 00:08:03.888 Initialization complete. Launching workers. 
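The nvme_overhead step whose workers just launched measures per-I/O software overhead: the submit and complete histograms that follow are in nanoseconds, unlike the microsecond buckets of the earlier spdk_nvme_perf run. The invocation below is reconstructed verbatim from the run_test line above; the flag glosses are inferred from SPDK's perf-tool conventions and worth checking against the tool's --help output:

```bash
# Reconstructed overhead invocation from this log. Inferred flag meanings:
#   -o 4096   4 KiB I/O size
#   -t 1      run for one second
#   -H        print the submit/complete latency histograms shown below
#   -i 0      shared-memory instance ID, matching the harness's '-i 0'
sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
```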
00:08:03.888 submit (in ns) avg, min, max = 16656.9, 12820.9, 121629.1
00:08:03.888 complete (in ns) avg, min, max = 12559.6, 8456.4, 100007.7
00:08:03.888
00:08:03.888 Submit histogram
00:08:03.888 ================
00:08:03.888 Range in us Cumulative Count
00:08:03.889 [full per-bucket listing elided: cumulative count rises from 0.0121% at 12.800 us, passes 50% near 13.9 us, and reaches 100.0000% in the 121.018 - 121.949 us bucket]
00:08:03.890
00:08:03.890 Complete histogram
00:08:03.890 ==================
00:08:03.890 Range in us Cumulative Count
00:08:03.892 [full per-bucket listing elided: cumulative count rises from 0.0483% at 8.436 us, passes 50% near 9.7 us, and reaches 100.0000% in the 99.607 - 100.073 us bucket]
00:08:03.892
00:08:03.892 ************************************
00:08:03.892 END TEST nvme_overhead
00:08:03.892 ************************************
00:08:03.892
00:08:03.892 real 0m1.315s
00:08:03.892 user 0m1.116s
00:08:03.892 sys 0m0.150s
00:08:03.892 08:08:08 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:03.892 08:08:08 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:08:03.892 08:08:08 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:03.892 08:08:08 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:08:03.892 08:08:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:03.892 08:08:08 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:03.892 ************************************
00:08:03.892 START TEST nvme_arbitration
00:08:03.892 ************************************
00:08:03.892 08:08:08 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:07.178 Initializing NVMe Controllers
00:08:07.178 Attached to 0000:00:10.0
00:08:07.178 Attached to 0000:00:11.0
00:08:07.178 Attached to 0000:00:13.0
00:08:07.178 Attached to 0000:00:12.0
00:08:07.178 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:08:07.178 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:08:07.178 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:08:07.178 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:08:07.178 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:08:07.178 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:08:07.178 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:08:07.178 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:08:07.178 Initialization complete. Launching workers.
00:08:07.178 Starting thread on core 1 with urgent priority queue
00:08:07.178 Starting thread on core 2 with urgent priority queue
00:08:07.178 Starting thread on core 3 with urgent priority queue
00:08:07.178 Starting thread on core 0 with urgent priority queue
00:08:07.178 QEMU NVMe Ctrl (12340 ) core 0: 661.33 IO/s 151.21 secs/100000 ios
00:08:07.178 QEMU NVMe Ctrl (12342 ) core 0: 661.33 IO/s 151.21 secs/100000 ios
00:08:07.178 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios
00:08:07.178 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios
00:08:07.178 QEMU NVMe Ctrl (12343 ) core 2: 597.33 IO/s 167.41 secs/100000 ios
00:08:07.178 QEMU NVMe Ctrl (12342 ) core 3: 725.33 IO/s 137.87 secs/100000 ios
00:08:07.178 ========================================================
00:08:07.178
00:08:07.178 ************************************
00:08:07.178 END TEST nvme_arbitration
00:08:07.178 ************************************
00:08:07.178
00:08:07.178 real 0m3.418s
00:08:07.178 user 0m9.329s
00:08:07.178 sys 0m0.164s
00:08:07.178 08:08:12 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:07.178 08:08:12 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
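The arbitration run above can be reproduced outside the harness with the same binary and flags; a minimal sketch (assumes an SPDK build at the path shown in the log and NVMe devices already bound for userspace I/O, as on the CI VM):

  # Run SPDK's arbitration example for 3 seconds on shared-memory ID 0.
  # Everything else echoed in the "run with configuration" line above
  # (-q 64 -s 131072 -w randrw -M 50 -c 0xf ...) is the example's default,
  # not a flag that needs repeating.
  sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0

The uneven per-core IO/s in the table (576.00 on core 1 versus 725.33 on core 3) is the behavior under test: each worker submits through a queue with its own arbitration priority, so identical workloads are not expected to complete at the same rate.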
00:08:07.438 08:08:12 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:07.438 08:08:12 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:07.438 08:08:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:07.438 08:08:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:07.438 ************************************
00:08:07.438 START TEST nvme_single_aen
00:08:07.438 ************************************
00:08:07.438 08:08:12 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:07.697 Asynchronous Event Request test
00:08:07.697 Attached to 0000:00:10.0
00:08:07.697 Attached to 0000:00:11.0
00:08:07.697 Attached to 0000:00:13.0
00:08:07.697 Attached to 0000:00:12.0
00:08:07.697 Reset controller to setup AER completions for this process
00:08:07.697 Registering asynchronous event callbacks...
00:08:07.697 Getting orig temperature thresholds of all controllers
00:08:07.697 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:07.697 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:07.697 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:07.697 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:07.697 Setting all controllers temperature threshold low to trigger AER
00:08:07.697 Waiting for all controllers temperature threshold to be set lower
00:08:07.697 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:07.697 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:08:07.697 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:07.697 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:08:07.697 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:07.697 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:08:07.697 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:07.697 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:08:07.697 Waiting for all controllers to trigger AER and reset threshold
00:08:07.697 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:07.697 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:07.697 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:07.697 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:07.697 Cleaning up...
00:08:07.697 ************************************
00:08:07.697 END TEST nvme_single_aen
00:08:07.697 ************************************
00:08:07.697
00:08:07.697 real 0m0.319s
00:08:07.697 user 0m0.113s
00:08:07.697 sys 0m0.145s
00:08:07.697 08:08:12 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:07.697 08:08:12 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
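What the aer tool does per controller maps to two standard NVMe admin operations: reading the Temperature Threshold feature (FID 0x04) and lowering it beneath the current composite temperature so the controller raises an Asynchronous Event pointing at log page 2 (SMART/health). A rough host-side equivalent with stock nvme-cli, which is not part of this harness (device name and values are illustrative):

  # Read the current temperature threshold (feature 0x04).
  nvme get-feature /dev/nvme0 -f 0x04 -H
  # Set the threshold to 322 K, just below the 323 K composite temperature
  # reported above, so the drive fires an AER; the test's aer_cb then restores
  # the original 343 K value.
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x142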
00:08:07.697 08:08:12 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:08:07.697 08:08:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:07.697 08:08:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:07.697 08:08:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:07.697 ************************************
00:08:07.697 START TEST nvme_doorbell_aers
00:08:07.697 ************************************
00:08:07.697 08:08:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:07.698 08:08:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:08:07.957 [2024-11-17 08:08:12.893426] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:17.930 Executing: test_write_invalid_db
00:08:17.930 Waiting for AER completion...
00:08:17.930 Failure: test_write_invalid_db
00:08:17.930
00:08:17.930 Executing: test_invalid_db_write_overflow_sq
00:08:17.930 Waiting for AER completion...
00:08:17.930 Failure: test_invalid_db_write_overflow_sq
00:08:17.930
00:08:17.930 Executing: test_invalid_db_write_overflow_cq
00:08:17.930 Waiting for AER completion...
00:08:17.930 Failure: test_invalid_db_write_overflow_cq
00:08:17.930
00:08:17.930 08:08:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:17.930 08:08:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:08:18.188 [2024-11-17 08:08:23.014973] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:28.161 Executing: test_write_invalid_db
00:08:28.161 Waiting for AER completion...
00:08:28.161 Failure: test_write_invalid_db
00:08:28.161
00:08:28.161 Executing: test_invalid_db_write_overflow_sq
00:08:28.161 Waiting for AER completion...
00:08:28.161 Failure: test_invalid_db_write_overflow_sq
00:08:28.161
00:08:28.161 Executing: test_invalid_db_write_overflow_cq
00:08:28.161 Waiting for AER completion...
00:08:28.161 Failure: test_invalid_db_write_overflow_cq
00:08:28.161
00:08:28.161 08:08:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:28.161 08:08:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:28.161 [2024-11-17 08:08:33.047863] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:38.134 Executing: test_write_invalid_db
00:08:38.134 Waiting for AER completion...
00:08:38.134 Failure: test_write_invalid_db
00:08:38.134
00:08:38.134 Executing: test_invalid_db_write_overflow_sq
00:08:38.134 Waiting for AER completion...
00:08:38.134 Failure: test_invalid_db_write_overflow_sq
00:08:38.134
00:08:38.134 Executing: test_invalid_db_write_overflow_cq
00:08:38.134 Waiting for AER completion...
00:08:38.134 Failure: test_invalid_db_write_overflow_cq
00:08:38.134
00:08:38.134 08:08:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:38.134 08:08:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:38.134 [2024-11-17 08:08:43.100269] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.168 Executing: test_write_invalid_db
00:08:48.168 Waiting for AER completion...
00:08:48.168 Failure: test_write_invalid_db
00:08:48.168
00:08:48.168 Executing: test_invalid_db_write_overflow_sq
00:08:48.168 Waiting for AER completion...
00:08:48.168 Failure: test_invalid_db_write_overflow_sq
00:08:48.168
00:08:48.168 Executing: test_invalid_db_write_overflow_cq
00:08:48.168 Waiting for AER completion...
00:08:48.168 Failure: test_invalid_db_write_overflow_cq
00:08:48.168
00:08:48.168
00:08:48.168 real 0m40.243s
00:08:48.168 user 0m34.129s
00:08:48.168 sys 0m5.781s
00:08:48.168 08:08:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:48.168 08:08:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:08:48.168 ************************************
00:08:48.168 END TEST nvme_doorbell_aers
00:08:48.168 ************************************
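The xtrace above spells out the harness's whole device loop; condensed into standalone bash, with the paths and jq filter taken verbatim from the @1499 and @73 trace lines:

  #!/usr/bin/env bash
  # Enumerate local NVMe PCI addresses the way autotest_common.sh's
  # get_nvme_bdfs does, then give each controller a 10-second doorbell_aers run.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      timeout --preserve-status 10 \
          "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done

Note the --preserve-status flag: doorbell_aers runs until killed, and with it timeout reports the command's own signal-derived exit status rather than its usual 124 when the 10 seconds elapse.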
00:08:48.168 08:08:52 nvme -- nvme/nvme.sh@97 -- # uname
00:08:48.168 08:08:52 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:08:48.168 08:08:52 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:48.168 08:08:52 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:08:48.168 08:08:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:48.168 08:08:52 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:48.168 ************************************
00:08:48.168 START TEST nvme_multi_aen
00:08:48.168 ************************************
00:08:48.168 08:08:52 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:48.168 [2024-11-17 08:08:53.170927] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.168 [2024-11-17 08:08:53.171302] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.168 [2024-11-17 08:08:53.171505] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.168 [2024-11-17 08:08:53.173145] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.168 [2024-11-17 08:08:53.173211] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.168 [2024-11-17 08:08:53.173227] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.168 [2024-11-17 08:08:53.174411] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.169 [2024-11-17 08:08:53.174464] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.169 [2024-11-17 08:08:53.174480] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.169 [2024-11-17 08:08:53.175905] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.169 [2024-11-17 08:08:53.175945] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.169 [2024-11-17 08:08:53.175962] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64085) is not found. Dropping the request.
00:08:48.427 Child process pid: 64606
00:08:48.686 [Child] Asynchronous Event Request test
00:08:48.686 [Child] Attached to 0000:00:10.0
00:08:48.686 [Child] Attached to 0000:00:11.0
00:08:48.686 [Child] Attached to 0000:00:13.0
00:08:48.686 [Child] Attached to 0000:00:12.0
00:08:48.686 [Child] Registering asynchronous event callbacks...
00:08:48.686 [Child] Getting orig temperature thresholds of all controllers
00:08:48.686 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:48.686 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:48.686 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:48.686 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:48.686 [Child] Waiting for all controllers to trigger AER and reset threshold
00:08:48.686 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:48.686 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:48.686 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:48.686 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:48.686 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:48.686 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:48.686 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:48.686 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:48.686 [Child] Cleaning up...
00:08:48.686 Asynchronous Event Request test
00:08:48.686 Attached to 0000:00:10.0
00:08:48.686 Attached to 0000:00:11.0
00:08:48.686 Attached to 0000:00:13.0
00:08:48.686 Attached to 0000:00:12.0
00:08:48.686 Reset controller to setup AER completions for this process
00:08:48.686 Registering asynchronous event callbacks...
00:08:48.686 Getting orig temperature thresholds of all controllers
00:08:48.686 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:48.686 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:48.686 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:48.686 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:48.686 Setting all controllers temperature threshold low to trigger AER
00:08:48.687 Waiting for all controllers temperature threshold to be set lower
00:08:48.687 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:48.687 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:08:48.687 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:48.687 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:08:48.687 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:48.687 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:08:48.687 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:48.687 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:08:48.687 Waiting for all controllers to trigger AER and reset threshold
00:08:48.687 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:48.687 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:48.687 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:48.687 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:48.687 Cleaning up...
00:08:48.687
00:08:48.687 real 0m0.650s
00:08:48.687 user 0m0.225s
00:08:48.687 sys 0m0.296s
00:08:48.687 08:08:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:48.687 08:08:53 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:08:48.687 ************************************
00:08:48.687 END TEST nvme_multi_aen
00:08:48.687 ************************************
00:08:48.687 08:08:53 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:08:48.687 08:08:53 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:48.687 08:08:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:48.687 08:08:53 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:48.687 ************************************
00:08:48.687 START TEST nvme_startup
00:08:48.687 ************************************
00:08:48.687 08:08:53 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:08:48.946 Initializing NVMe Controllers
00:08:48.946 Attached to 0000:00:10.0
00:08:48.946 Attached to 0000:00:11.0
00:08:48.946 Attached to 0000:00:13.0
00:08:48.946 Attached to 0000:00:12.0
00:08:48.946 Initialization complete.
00:08:48.946 Time used:212778.484 (us).
00:08:48.946 ************************************
00:08:48.946 END TEST nvme_startup
00:08:48.946 ************************************
00:08:48.946
00:08:48.946 real 0m0.296s
00:08:48.946 user 0m0.115s
00:08:48.946 sys 0m0.136s
00:08:48.946 08:08:53 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:48.946 08:08:53 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:08:48.946 08:08:53 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:08:48.946 08:08:53 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:48.946 08:08:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:48.946 08:08:53 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:48.946 ************************************
00:08:48.946 START TEST nvme_multi_secondary
00:08:48.946 ************************************
00:08:48.946 08:08:53 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:08:48.946 08:08:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64662
00:08:48.946 08:08:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:08:48.946 08:08:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64663
00:08:48.946 08:08:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:08:48.946 08:08:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:08:53.137 Initializing NVMe Controllers
00:08:53.137 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:53.137 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:53.137 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:53.137 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:53.137 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:53.137 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:53.137 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:53.137 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:53.137 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:53.137 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:53.137 Initialization complete. Launching workers.
00:08:53.137 ========================================================
00:08:53.137 Latency(us)
00:08:53.137 Device Information : IOPS MiB/s Average min max
00:08:53.137 PCIE (0000:00:10.0) NSID 1 from core 2: 2580.38 10.08 6197.59 1565.47 12126.16
00:08:53.137 PCIE (0000:00:11.0) NSID 1 from core 2: 2580.38 10.08 6200.82 1801.76 12263.25
00:08:53.137 PCIE (0000:00:13.0) NSID 1 from core 2: 2580.38 10.08 6200.66 1987.74 12044.56
00:08:53.137 PCIE (0000:00:12.0) NSID 1 from core 2: 2580.38 10.08 6200.55 1858.82 12377.57
00:08:53.137 PCIE (0000:00:12.0) NSID 2 from core 2: 2580.38 10.08 6200.66 1767.16 12268.69
00:08:53.137 PCIE (0000:00:12.0) NSID 3 from core 2: 2580.38 10.08 6199.95 1715.90 12312.34
00:08:53.137 ========================================================
00:08:53.137 Total : 15482.26 60.48 6200.04 1565.47 12377.57
00:08:53.137
00:08:53.137 08:08:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64662
00:08:53.137 Initializing NVMe Controllers
00:08:53.137 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:53.137 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:53.137 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:53.137 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:53.137 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:08:53.137 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:08:53.137 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:08:53.137 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:08:53.137 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:08:53.137 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:08:53.137 Initialization complete. Launching workers.
00:08:53.137 ========================================================
00:08:53.137 Latency(us)
00:08:53.137 Device Information : IOPS MiB/s Average min max
00:08:53.137 PCIE (0000:00:10.0) NSID 1 from core 1: 5191.19 20.28 3080.39 1565.68 12403.92
00:08:53.137 PCIE (0000:00:11.0) NSID 1 from core 1: 5191.19 20.28 3081.67 1679.05 12665.48
00:08:53.137 PCIE (0000:00:13.0) NSID 1 from core 1: 5191.19 20.28 3081.51 1710.74 12950.44
00:08:53.137 PCIE (0000:00:12.0) NSID 1 from core 1: 5191.19 20.28 3081.46 1592.78 12891.86
00:08:53.137 PCIE (0000:00:12.0) NSID 2 from core 1: 5191.19 20.28 3081.41 1594.11 12438.72
00:08:53.137 PCIE (0000:00:12.0) NSID 3 from core 1: 5191.19 20.28 3081.44 1334.27 12627.22
00:08:53.137 ========================================================
00:08:53.137 Total : 31147.12 121.67 3081.31 1334.27 12950.44
00:08:53.137
00:08:54.513 Initializing NVMe Controllers
00:08:54.513 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:54.513 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:54.513 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:54.513 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:54.513 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:54.513 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:54.513 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:54.513 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:54.513 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:54.513 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:54.513 Initialization complete. Launching workers.
00:08:54.513 ========================================================
00:08:54.513 Latency(us)
00:08:54.513 Device Information : IOPS MiB/s Average min max
00:08:54.513 PCIE (0000:00:10.0) NSID 1 from core 0: 8108.53 31.67 1971.83 982.90 5428.22
00:08:54.513 PCIE (0000:00:11.0) NSID 1 from core 0: 8108.53 31.67 1972.82 986.35 5993.25
00:08:54.513 PCIE (0000:00:13.0) NSID 1 from core 0: 8108.53 31.67 1972.80 910.92 6218.71
00:08:54.513 PCIE (0000:00:12.0) NSID 1 from core 0: 8108.53 31.67 1972.77 896.96 6098.27
00:08:54.514 PCIE (0000:00:12.0) NSID 2 from core 0: 8108.53 31.67 1972.75 809.96 6020.11
00:08:54.514 PCIE (0000:00:12.0) NSID 3 from core 0: 8108.53 31.67 1972.72 743.31 5936.55
00:08:54.514 ========================================================
00:08:54.514 Total : 48651.17 190.04 1972.62 743.31 6218.71
00:08:54.514
00:08:54.514 08:08:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64663
00:08:54.514 08:08:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64732
00:08:54.514 08:08:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:08:54.514 08:08:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64733
00:08:54.514 08:08:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:08:54.514 08:08:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:08:57.800 Initializing NVMe Controllers
00:08:57.800 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:57.800 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:57.800 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:57.800 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:57.800 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:57.800 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:57.800 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:57.800 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:57.800 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:57.800 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:57.800 Initialization complete. Launching workers.
00:08:57.800 ========================================================
00:08:57.800 Latency(us)
00:08:57.800 Device Information : IOPS MiB/s Average min max
00:08:57.801 PCIE (0000:00:10.0) NSID 1 from core 0: 5183.61 20.25 3085.01 1153.42 5701.30
00:08:57.801 PCIE (0000:00:11.0) NSID 1 from core 0: 5183.61 20.25 3086.46 1148.23 5918.10
00:08:57.801 PCIE (0000:00:13.0) NSID 1 from core 0: 5183.61 20.25 3086.65 1182.42 5774.24
00:08:57.801 PCIE (0000:00:12.0) NSID 1 from core 0: 5183.61 20.25 3086.70 1177.05 5547.94
00:08:57.801 PCIE (0000:00:12.0) NSID 2 from core 0: 5183.61 20.25 3086.67 1179.89 5392.64
00:08:57.801 PCIE (0000:00:12.0) NSID 3 from core 0: 5183.61 20.25 3086.88 1179.63 5494.04
00:08:57.801 ========================================================
00:08:57.801 Total : 31101.64 121.49 3086.40 1148.23 5918.10
00:08:57.801
00:08:58.060 Initializing NVMe Controllers
00:08:58.060 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:58.060 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:58.060 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:58.060 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:58.060 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:08:58.060 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:08:58.060 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:08:58.060 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:08:58.060 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:08:58.060 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:08:58.060 Initialization complete. Launching workers.
00:08:58.060 ========================================================
00:08:58.060 Latency(us)
00:08:58.060 Device Information : IOPS MiB/s Average min max
00:08:58.060 PCIE (0000:00:10.0) NSID 1 from core 1: 5359.89 20.94 2983.00 1070.55 10837.66
00:08:58.060 PCIE (0000:00:11.0) NSID 1 from core 1: 5359.89 20.94 2984.46 1077.53 10751.88
00:08:58.060 PCIE (0000:00:13.0) NSID 1 from core 1: 5359.89 20.94 2984.34 1005.39 10815.91
00:08:58.060 PCIE (0000:00:12.0) NSID 1 from core 1: 5359.89 20.94 2984.21 965.15 11108.72
00:08:58.060 PCIE (0000:00:12.0) NSID 2 from core 1: 5359.89 20.94 2984.18 1026.02 11486.07
00:08:58.060 PCIE (0000:00:12.0) NSID 3 from core 1: 5359.89 20.94 2984.06 979.32 10919.89
00:08:58.060 ========================================================
00:08:58.060 Total : 32159.34 125.62 2984.04 965.15 11486.07
00:08:58.060
00:08:59.964 Initializing NVMe Controllers
00:08:59.964 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:59.964 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:59.964 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:59.964 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:59.964 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:59.964 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:59.964 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:59.964 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:59.964 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:59.964 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:59.964 Initialization complete. Launching workers.
00:08:59.964 ========================================================
00:08:59.964 Latency(us)
00:08:59.964 Device Information : IOPS MiB/s Average min max
00:08:59.964 PCIE (0000:00:10.0) NSID 1 from core 2: 3572.33 13.95 4476.57 1053.25 13131.52
00:08:59.964 PCIE (0000:00:11.0) NSID 1 from core 2: 3572.33 13.95 4477.70 997.94 13000.67
00:08:59.964 PCIE (0000:00:13.0) NSID 1 from core 2: 3572.33 13.95 4478.49 1048.14 13006.21
00:08:59.964 PCIE (0000:00:12.0) NSID 1 from core 2: 3572.33 13.95 4478.23 1033.10 12886.33
00:08:59.964 PCIE (0000:00:12.0) NSID 2 from core 2: 3572.33 13.95 4474.31 1043.73 12964.65
00:08:59.964 PCIE (0000:00:12.0) NSID 3 from core 2: 3572.33 13.95 4473.91 783.77 12476.84
00:08:59.964 ========================================================
00:08:59.964 Total : 21433.98 83.73 4476.53 783.77 13131.52
00:08:59.964
00:08:59.964 08:09:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64732
00:08:59.964 08:09:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64733
00:08:59.964
00:08:59.964 real 0m10.705s
00:08:59.964 user 0m18.638s
00:08:59.964 sys 0m1.006s
00:08:59.964 08:09:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:59.964 08:09:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:08:59.965 ************************************
00:08:59.965 END TEST nvme_multi_secondary
00:08:59.965 ************************************
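The trace above is the second round of nvme_multi_secondary: one spdk_nvme_perf process plus two more attached to the same SPDK shared-memory instance via the matching -i 0. The same pattern, reduced to a standalone sketch (binary path and flags verbatim from the log; the sleep is an assumption standing in for the harness's own synchronization):

  #!/usr/bin/env bash
  # One perf process per core; all three share instance ID 0, so they attach
  # to the same controllers as primary and secondaries.
  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 & pid0=$!
  sleep 1   # let the first process initialize before the others attach (assumption)
  "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 & pid1=$!
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
  wait "$pid0" "$pid1"

All three processes drive the same four controllers concurrently, which is why per-core average latency here (for example 4476 us on core 2 versus 3086 us on core 0) is well above the single-process numbers earlier in the log.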
00:08:59.965 08:09:04 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:08:59.965 08:09:04 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:08:59.965 08:09:04 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/63666 ]]
00:08:59.965 08:09:04 nvme -- common/autotest_common.sh@1094 -- # kill 63666
00:08:59.965 08:09:04 nvme -- common/autotest_common.sh@1095 -- # wait 63666
00:08:59.965 [2024-11-17 08:09:04.696452] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.696790] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.696847] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.696878] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.699973] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.700309] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.700348] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.700377] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.703217] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.703272] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.703294] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.703338] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.705534] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.705590] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.705612] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 [2024-11-17 08:09:04.705631] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64605) is not found. Dropping the request.
00:08:59.965 08:09:04 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:08:59.965 08:09:04 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:08:59.965 08:09:04 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:08:59.965 08:09:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:59.965 08:09:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:59.965 08:09:04 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:59.965 ************************************
00:08:59.965 START TEST bdev_nvme_reset_stuck_adm_cmd
00:08:59.965 ************************************
00:08:59.965 08:09:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:09:00.224 * Looking for test storage...
00:09:00.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:00.224 08:09:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:00.224 08:09:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version
00:09:00.224 08:09:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:00.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:00.225 --rc genhtml_branch_coverage=1
00:09:00.225 --rc genhtml_function_coverage=1
00:09:00.225 --rc genhtml_legend=1
00:09:00.225 --rc geninfo_all_blocks=1
00:09:00.225 --rc geninfo_unexecuted_blocks=1
00:09:00.225
00:09:00.225 '
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:00.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:00.225 --rc genhtml_branch_coverage=1
00:09:00.225 --rc genhtml_function_coverage=1
00:09:00.225 --rc genhtml_legend=1
00:09:00.225 --rc geninfo_all_blocks=1
00:09:00.225 --rc geninfo_unexecuted_blocks=1
00:09:00.225
00:09:00.225 '
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:09:00.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:00.225 --rc genhtml_branch_coverage=1
00:09:00.225 --rc genhtml_function_coverage=1
00:09:00.225 --rc genhtml_legend=1
00:09:00.225 --rc geninfo_all_blocks=1
00:09:00.225 --rc geninfo_unexecuted_blocks=1
00:09:00.225
00:09:00.225 '
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:09:00.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:00.225 --rc genhtml_branch_coverage=1
00:09:00.225 --rc genhtml_function_coverage=1
00:09:00.225 --rc genhtml_legend=1
00:09:00.225 --rc geninfo_all_blocks=1
00:09:00.225 --rc geninfo_unexecuted_blocks=1
00:09:00.225
00:09:00.225 '
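The @333-@368 trace above is scripts/common.sh deciding whether the installed lcov (1.15) is older than 2. The same split-and-compare logic, reduced to a self-contained sketch (a simplified reconstruction, not the verbatim SPDK helper):

  # Return 0 when dotted version $1 is strictly less than $2.
  version_lt() {
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "lcov predates 2: keep the --rc compatibility options"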
08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64895 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64895 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64895 ']' 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
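A quick orientation before the EAL output that follows: the trace above resolved the first NVMe BDF (gen_nvme.sh piped through jq yielded 0000:00:10.0 of the four emulated controllers), launched spdk_tgt on core mask 0xF, and now blocks until the RPC socket /var/tmp/spdk.sock is listening. Everything after that is driven over this socket. A minimal sketch of the same RPC sequence, built only from the rpc.py calls that appear verbatim in this run (the standalone wrapper is illustrative, not part of the test suite; the payload is the base64-encoded Get Features admin command shown in the trace below):

    #!/usr/bin/env bash
    # Sketch: the reset-stuck-admin-command flow, driven by hand over rpc.py.
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base64 admin command (Get Features, cdw10=7) as issued in this run.
    payload=CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==

    # Attach the first PCIe controller as bdev controller "nvme0".
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

    # Arm a one-shot injection on admin opcode 10 (Get Features): hold the
    # command for up to 15 s, then complete it with sct=0 / sc=1.
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

    # Issue the admin command that will get stuck, then reset the controller
    # while it is pending; the reset must abort and complete it manually.
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$payload" &
    sleep 2
    "$rpc" bdev_nvme_reset_controller nvme0
    wait                                  # reaps the aborted send_cmd
    "$rpc" bdev_nvme_detach_controller nvme0

bdev_nvme_send_cmd prints a JSON blob whose .cpl field carries the base64 completion (the test saves it to a temp file); the test then decodes that completion and asserts the status matches the injected sct=0 / sc=1, and that the reset returned within the 5 s budget instead of waiting out the 15 s injection timeout.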
00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.225 08:09:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:00.484 [2024-11-17 08:09:05.286624] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:00.484 [2024-11-17 08:09:05.286794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64895 ] 00:09:00.742 [2024-11-17 08:09:05.496433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.742 [2024-11-17 08:09:05.627832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.742 [2024-11-17 08:09:05.627978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.742 [2024-11-17 08:09:05.628165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.742 [2024-11-17 08:09:05.628177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.309 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.309 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:01.310 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:01.310 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.310 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:01.568 nvme0n1 00:09:01.568 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.568 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:01.568 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_1MFZE.txt 00:09:01.568 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:01.568 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.568 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:01.568 true 00:09:01.568 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.568 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:01.568 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731830946 00:09:01.568 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64923 00:09:01.568 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:01.568 08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:01.568 
08:09:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:03.470 [2024-11-17 08:09:08.417834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:03.470 [2024-11-17 08:09:08.418315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:03.470 [2024-11-17 08:09:08.418365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:03.470 [2024-11-17 08:09:08.418386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:03.470 [2024-11-17 08:09:08.420511] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.470 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64923 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64923 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64923 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:03.470 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_1MFZE.txt 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_1MFZE.txt 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64895 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64895 ']' 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64895 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64895 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.729 killing process with pid 64895 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64895' 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64895 00:09:03.729 08:09:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64895 00:09:05.631 08:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:05.631 08:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:05.631 00:09:05.631 real 0m5.430s 00:09:05.631 user 0m19.049s 00:09:05.631 sys 0m0.640s 00:09:05.631 08:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:05.631 ************************************ 00:09:05.631 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:05.631 ************************************ 00:09:05.631 08:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.631 08:09:10 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:05.631 08:09:10 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:05.631 08:09:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.631 08:09:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.631 08:09:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.631 ************************************ 00:09:05.631 START TEST nvme_fio 00:09:05.631 ************************************ 00:09:05.631 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:05.631 08:09:10 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:05.631 08:09:10 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:05.631 08:09:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:05.631 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:05.631 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:05.631 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:05.631 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:05.631 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:05.631 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:05.631 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:05.631 08:09:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:05.631 08:09:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:05.631 08:09:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:05.631 08:09:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:05.631 08:09:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:05.890 08:09:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:05.890 08:09:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:06.149 08:09:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:06.149 08:09:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:06.149 08:09:10 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:06.149 08:09:10 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:06.408 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:06.408 fio-3.35 00:09:06.408 Starting 1 thread 00:09:09.697 00:09:09.697 test: (groupid=0, jobs=1): err= 0: pid=65063: Sun Nov 17 08:09:13 2024 00:09:09.697 read: IOPS=12.3k, BW=48.1MiB/s (50.5MB/s)(96.3MiB/2001msec) 00:09:09.697 slat (nsec): min=4151, max=74910, avg=7497.93, stdev=4635.62 00:09:09.697 clat (usec): min=290, max=11048, avg=5175.22, stdev=474.05 00:09:09.697 lat (usec): min=295, max=11110, avg=5182.72, stdev=474.48 00:09:09.697 clat percentiles (usec): 00:09:09.697 | 1.00th=[ 3720], 5.00th=[ 4424], 10.00th=[ 4686], 20.00th=[ 4883], 00:09:09.697 | 30.00th=[ 5014], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5276], 00:09:09.697 | 70.00th=[ 5407], 80.00th=[ 5473], 90.00th=[ 5669], 95.00th=[ 5866], 00:09:09.697 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 7963], 99.95th=[ 9634], 00:09:09.697 | 99.99th=[10945] 00:09:09.697 bw ( KiB/s): min=48240, max=49432, per=99.26%, avg=48925.33, stdev=615.76, samples=3 00:09:09.697 iops : min=12060, max=12358, avg=12231.33, stdev=153.94, samples=3 00:09:09.697 write: IOPS=12.3k, BW=48.0MiB/s (50.3MB/s)(96.0MiB/2001msec); 0 zone resets 00:09:09.697 slat (nsec): min=4221, max=58440, avg=7726.47, stdev=4630.20 00:09:09.697 clat (usec): min=236, max=10864, avg=5187.75, stdev=472.54 00:09:09.697 lat (usec): min=257, max=10882, avg=5195.48, stdev=473.00 00:09:09.697 clat percentiles (usec): 00:09:09.697 | 1.00th=[ 3752], 5.00th=[ 4490], 10.00th=[ 4686], 20.00th=[ 4883], 00:09:09.697 | 30.00th=[ 5014], 40.00th=[ 5080], 50.00th=[ 5211], 60.00th=[ 5276], 00:09:09.697 | 70.00th=[ 5407], 80.00th=[ 5538], 90.00th=[ 5669], 95.00th=[ 5866], 00:09:09.697 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 8356], 99.95th=[ 9503], 00:09:09.697 | 99.99th=[10552] 00:09:09.697 bw ( KiB/s): min=48496, max=49592, per=99.65%, avg=48970.67, stdev=562.53, samples=3 00:09:09.697 iops : min=12124, max=12398, avg=12242.67, stdev=140.63, samples=3 00:09:09.697 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:09.697 lat (msec) : 2=0.05%, 4=1.62%, 10=98.25%, 20=0.04% 00:09:09.697 cpu : usr=98.40%, sys=0.30%, 
ctx=6, majf=0, minf=608 00:09:09.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:09.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.698 issued rwts: total=24656,24584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.698 00:09:09.698 Run status group 0 (all jobs): 00:09:09.698 READ: bw=48.1MiB/s (50.5MB/s), 48.1MiB/s-48.1MiB/s (50.5MB/s-50.5MB/s), io=96.3MiB (101MB), run=2001-2001msec 00:09:09.698 WRITE: bw=48.0MiB/s (50.3MB/s), 48.0MiB/s-48.0MiB/s (50.3MB/s-50.3MB/s), io=96.0MiB (101MB), run=2001-2001msec 00:09:09.698 ----------------------------------------------------- 00:09:09.698 Suppressions used: 00:09:09.698 count bytes template 00:09:09.698 1 32 /usr/src/fio/parse.c 00:09:09.698 1 8 libtcmalloc_minimal.so 00:09:09.698 ----------------------------------------------------- 00:09:09.698 00:09:09.698 08:09:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:09.698 08:09:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:09.698 08:09:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:09.698 08:09:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:09.698 08:09:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:09.698 08:09:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:09.957 08:09:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:09.957 08:09:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:09.957 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:09.957 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:09.957 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:09.957 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:09.957 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:09.957 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:09.957 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:09.957 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:09.957 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:09.957 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:09.958 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:09.958 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:09.958 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:09.958 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:09.958 08:09:14 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:09.958 08:09:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:10.217 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:10.217 fio-3.35 00:09:10.217 Starting 1 thread 00:09:13.509 00:09:13.509 test: (groupid=0, jobs=1): err= 0: pid=65129: Sun Nov 17 08:09:17 2024 00:09:13.509 read: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2001msec) 00:09:13.509 slat (nsec): min=3935, max=51514, avg=6407.42, stdev=3938.30 00:09:13.509 clat (usec): min=348, max=9858, avg=4642.08, stdev=544.79 00:09:13.509 lat (usec): min=367, max=9906, avg=4648.49, stdev=545.36 00:09:13.509 clat percentiles (usec): 00:09:13.509 | 1.00th=[ 3523], 5.00th=[ 3818], 10.00th=[ 4015], 20.00th=[ 4228], 00:09:13.509 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4752], 00:09:13.509 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5342], 95.00th=[ 5538], 00:09:13.509 | 99.00th=[ 5800], 99.50th=[ 6128], 99.90th=[ 8029], 99.95th=[ 8717], 00:09:13.509 | 99.99th=[ 9765] 00:09:13.509 bw ( KiB/s): min=49160, max=57224, per=98.31%, avg=53992.00, stdev=4263.45, samples=3 00:09:13.509 iops : min=12290, max=14306, avg=13498.00, stdev=1065.86, samples=3 00:09:13.509 write: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(107MiB/2001msec); 0 zone resets 00:09:13.509 slat (nsec): min=4167, max=66739, avg=6517.05, stdev=3979.00 00:09:13.509 clat (usec): min=293, max=9727, avg=4654.96, stdev=547.05 00:09:13.509 lat (usec): min=298, max=9744, avg=4661.48, stdev=547.62 00:09:13.509 clat percentiles (usec): 00:09:13.509 | 1.00th=[ 3523], 5.00th=[ 3818], 10.00th=[ 4015], 20.00th=[ 4228], 00:09:13.509 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4752], 00:09:13.509 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5342], 95.00th=[ 5538], 00:09:13.509 | 99.00th=[ 5866], 99.50th=[ 6128], 99.90th=[ 8094], 99.95th=[ 8586], 00:09:13.509 | 99.99th=[ 9503] 00:09:13.509 bw ( KiB/s): min=49640, max=56840, per=98.59%, avg=54053.33, stdev=3865.82, samples=3 00:09:13.509 iops : min=12410, max=14210, avg=13513.33, stdev=966.45, samples=3 00:09:13.510 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:13.510 lat (msec) : 2=0.07%, 4=9.73%, 10=90.16% 00:09:13.510 cpu : usr=98.80%, sys=0.05%, ctx=7, majf=0, minf=607 00:09:13.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:13.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.510 issued rwts: total=27474,27426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.510 00:09:13.510 Run status group 0 (all jobs): 00:09:13.510 READ: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (113MB), run=2001-2001msec 00:09:13.510 WRITE: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=107MiB (112MB), run=2001-2001msec 00:09:13.510 ----------------------------------------------------- 00:09:13.510 Suppressions used: 00:09:13.510 count bytes template 00:09:13.510 1 32 /usr/src/fio/parse.c 00:09:13.510 1 8 libtcmalloc_minimal.so 00:09:13.510 ----------------------------------------------------- 00:09:13.510 00:09:13.510 08:09:18 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:13.510 08:09:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:13.510 08:09:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:13.510 08:09:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:13.510 08:09:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:13.510 08:09:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:14.079 08:09:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:14.079 08:09:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:14.079 08:09:18 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:14.079 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:14.079 fio-3.35 00:09:14.079 Starting 1 thread 00:09:17.370 00:09:17.370 test: (groupid=0, jobs=1): err= 0: pid=65190: Sun Nov 17 08:09:21 2024 00:09:17.370 read: IOPS=11.9k, BW=46.7MiB/s (48.9MB/s)(93.4MiB/2001msec) 00:09:17.370 slat (nsec): min=4241, max=67315, avg=7856.73, stdev=4759.40 00:09:17.370 clat (usec): min=289, max=9480, avg=5345.31, stdev=505.03 00:09:17.370 lat (usec): min=294, max=9516, avg=5353.16, stdev=505.79 00:09:17.370 clat percentiles (usec): 00:09:17.370 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5080], 00:09:17.370 | 30.00th=[ 5145], 40.00th=[ 
5211], 50.00th=[ 5276], 60.00th=[ 5342], 00:09:17.370 | 70.00th=[ 5407], 80.00th=[ 5538], 90.00th=[ 5800], 95.00th=[ 6063], 00:09:17.370 | 99.00th=[ 7701], 99.50th=[ 8160], 99.90th=[ 8586], 99.95th=[ 8717], 00:09:17.370 | 99.99th=[ 9503] 00:09:17.370 bw ( KiB/s): min=45952, max=48712, per=98.93%, avg=47290.67, stdev=1381.86, samples=3 00:09:17.370 iops : min=11488, max=12178, avg=11822.67, stdev=345.46, samples=3 00:09:17.370 write: IOPS=11.9k, BW=46.5MiB/s (48.7MB/s)(93.0MiB/2001msec); 0 zone resets 00:09:17.370 slat (nsec): min=4431, max=70800, avg=8318.02, stdev=4927.72 00:09:17.370 clat (usec): min=405, max=9412, avg=5352.46, stdev=490.98 00:09:17.370 lat (usec): min=411, max=9425, avg=5360.78, stdev=491.74 00:09:17.370 clat percentiles (usec): 00:09:17.370 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5080], 00:09:17.370 | 30.00th=[ 5145], 40.00th=[ 5211], 50.00th=[ 5276], 60.00th=[ 5342], 00:09:17.370 | 70.00th=[ 5473], 80.00th=[ 5538], 90.00th=[ 5800], 95.00th=[ 6063], 00:09:17.370 | 99.00th=[ 7635], 99.50th=[ 8094], 99.90th=[ 8586], 99.95th=[ 8717], 00:09:17.370 | 99.99th=[ 9241] 00:09:17.370 bw ( KiB/s): min=46296, max=49240, per=99.54%, avg=47373.33, stdev=1622.98, samples=3 00:09:17.370 iops : min=11574, max=12310, avg=11843.33, stdev=405.75, samples=3 00:09:17.370 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:17.370 lat (msec) : 2=0.05%, 4=0.16%, 10=99.75% 00:09:17.370 cpu : usr=98.60%, sys=0.20%, ctx=3, majf=0, minf=607 00:09:17.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:17.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:17.370 issued rwts: total=23913,23807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:17.370 00:09:17.370 Run status group 0 (all jobs): 00:09:17.370 READ: bw=46.7MiB/s (48.9MB/s), 46.7MiB/s-46.7MiB/s (48.9MB/s-48.9MB/s), io=93.4MiB (97.9MB), run=2001-2001msec 00:09:17.370 WRITE: bw=46.5MiB/s (48.7MB/s), 46.5MiB/s-46.5MiB/s (48.7MB/s-48.7MB/s), io=93.0MiB (97.5MB), run=2001-2001msec 00:09:17.370 ----------------------------------------------------- 00:09:17.370 Suppressions used: 00:09:17.370 count bytes template 00:09:17.370 1 32 /usr/src/fio/parse.c 00:09:17.370 1 8 libtcmalloc_minimal.so 00:09:17.370 ----------------------------------------------------- 00:09:17.370 00:09:17.370 08:09:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:17.370 08:09:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:17.370 08:09:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:17.370 08:09:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:17.630 08:09:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:17.630 08:09:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:17.889 08:09:22 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:17.889 08:09:22 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:17.889 08:09:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:18.148 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:18.148 fio-3.35 00:09:18.148 Starting 1 thread 00:09:22.341 00:09:22.341 test: (groupid=0, jobs=1): err= 0: pid=65250: Sun Nov 17 08:09:26 2024 00:09:22.341 read: IOPS=13.1k, BW=51.2MiB/s (53.7MB/s)(103MiB/2001msec) 00:09:22.341 slat (nsec): min=4117, max=66892, avg=6872.17, stdev=4213.54 00:09:22.341 clat (usec): min=377, max=9655, avg=4854.97, stdev=560.40 00:09:22.341 lat (usec): min=382, max=9706, avg=4861.85, stdev=561.16 00:09:22.341 clat percentiles (usec): 00:09:22.341 | 1.00th=[ 3621], 5.00th=[ 3982], 10.00th=[ 4228], 20.00th=[ 4490], 00:09:22.341 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 00:09:22.341 | 70.00th=[ 5014], 80.00th=[ 5211], 90.00th=[ 5604], 95.00th=[ 5800], 00:09:22.341 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 8848], 99.95th=[ 8979], 00:09:22.341 | 99.99th=[ 9503] 00:09:22.341 bw ( KiB/s): min=47496, max=54104, per=97.24%, avg=51013.33, stdev=3324.60, samples=3 00:09:22.341 iops : min=11874, max=13526, avg=12753.33, stdev=831.15, samples=3 00:09:22.341 write: IOPS=13.1k, BW=51.2MiB/s (53.7MB/s)(103MiB/2001msec); 0 zone resets 00:09:22.341 slat (nsec): min=4216, max=58938, avg=7223.81, stdev=4370.73 00:09:22.341 clat (usec): min=326, max=9487, avg=4867.77, stdev=576.84 00:09:22.341 lat (usec): min=331, max=9505, avg=4875.00, stdev=577.60 00:09:22.341 clat percentiles (usec): 00:09:22.341 | 1.00th=[ 3621], 5.00th=[ 3982], 10.00th=[ 4228], 20.00th=[ 4490], 00:09:22.341 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:09:22.341 | 70.00th=[ 5080], 80.00th=[ 5276], 90.00th=[ 5604], 95.00th=[ 5800], 00:09:22.341 
| 99.00th=[ 6128], 99.50th=[ 7308], 99.90th=[ 8979], 99.95th=[ 8979], 00:09:22.341 | 99.99th=[ 9241] 00:09:22.341 bw ( KiB/s): min=47536, max=54008, per=97.33%, avg=51066.67, stdev=3276.00, samples=3 00:09:22.341 iops : min=11884, max=13502, avg=12766.67, stdev=819.00, samples=3 00:09:22.341 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:22.341 lat (msec) : 2=0.04%, 4=5.20%, 10=94.73% 00:09:22.341 cpu : usr=98.70%, sys=0.05%, ctx=4, majf=0, minf=605 00:09:22.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:22.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.341 issued rwts: total=26244,26246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.341 00:09:22.341 Run status group 0 (all jobs): 00:09:22.341 READ: bw=51.2MiB/s (53.7MB/s), 51.2MiB/s-51.2MiB/s (53.7MB/s-53.7MB/s), io=103MiB (107MB), run=2001-2001msec 00:09:22.341 WRITE: bw=51.2MiB/s (53.7MB/s), 51.2MiB/s-51.2MiB/s (53.7MB/s-53.7MB/s), io=103MiB (108MB), run=2001-2001msec 00:09:22.341 ----------------------------------------------------- 00:09:22.341 Suppressions used: 00:09:22.341 count bytes template 00:09:22.341 1 32 /usr/src/fio/parse.c 00:09:22.341 1 8 libtcmalloc_minimal.so 00:09:22.341 ----------------------------------------------------- 00:09:22.341 00:09:22.341 08:09:26 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:22.341 08:09:26 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:22.341 00:09:22.341 real 0m16.457s 00:09:22.341 user 0m13.037s 00:09:22.341 sys 0m2.121s 00:09:22.341 08:09:26 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.341 08:09:26 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:22.341 ************************************ 00:09:22.341 END TEST nvme_fio 00:09:22.341 ************************************ 00:09:22.341 00:09:22.341 real 1m29.680s 00:09:22.341 user 3m43.027s 00:09:22.341 sys 0m14.467s 00:09:22.341 08:09:26 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.341 08:09:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.341 ************************************ 00:09:22.341 END TEST nvme 00:09:22.341 ************************************ 00:09:22.341 08:09:26 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:22.341 08:09:26 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:22.341 08:09:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.341 08:09:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.341 08:09:26 -- common/autotest_common.sh@10 -- # set +x 00:09:22.341 ************************************ 00:09:22.341 START TEST nvme_scc 00:09:22.341 ************************************ 00:09:22.341 08:09:26 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:22.341 * Looking for test storage... 
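Each per-BDF pass in the nvme_fio test above follows the same shape: spdk_nvme_identify confirms the namespace and checks for extended-data LBA formats, the block size is pinned at 4096, and fio is launched with SPDK's external NVMe ioengine preloaded (libasan is preloaded ahead of it because this build runs under ASAN). A minimal sketch of that invocation, using only pieces visible in this run (example_config.fio and the plugin path come from the log; treat the wrapper as illustrative):

    #!/usr/bin/env bash
    # Sketch: fio against a PCIe NVMe controller via the SPDK fio plugin.
    set -euo pipefail
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    cfg=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    # ASAN-instrumented plugin: the sanitizer runtime must be preloaded first.
    LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" \
        /usr/src/fio/fio "$cfg" \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

Note the filename syntax: the PCIe address is written with dots (0000.00.10.0) rather than colons, since fio reserves ':' to separate multiple filenames.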
00:09:22.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:22.341 08:09:27 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.341 08:09:27 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.341 08:09:27 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.341 08:09:27 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:22.341 08:09:27 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.342 08:09:27 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:22.342 08:09:27 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:22.342 08:09:27 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.342 08:09:27 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:22.342 08:09:27 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.342 08:09:27 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.342 08:09:27 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.342 08:09:27 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:22.342 08:09:27 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.342 08:09:27 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.342 --rc genhtml_branch_coverage=1 00:09:22.342 --rc genhtml_function_coverage=1 00:09:22.342 --rc genhtml_legend=1 00:09:22.342 --rc geninfo_all_blocks=1 00:09:22.342 --rc geninfo_unexecuted_blocks=1 00:09:22.342 00:09:22.342 ' 00:09:22.342 08:09:27 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.342 --rc genhtml_branch_coverage=1 00:09:22.342 --rc genhtml_function_coverage=1 00:09:22.342 --rc genhtml_legend=1 00:09:22.342 --rc geninfo_all_blocks=1 00:09:22.342 --rc geninfo_unexecuted_blocks=1 00:09:22.342 00:09:22.342 ' 00:09:22.342 08:09:27 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:22.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.342 --rc genhtml_branch_coverage=1 00:09:22.342 --rc genhtml_function_coverage=1 00:09:22.342 --rc genhtml_legend=1 00:09:22.342 --rc geninfo_all_blocks=1 00:09:22.342 --rc geninfo_unexecuted_blocks=1 00:09:22.342 00:09:22.342 ' 00:09:22.342 08:09:27 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.342 --rc genhtml_branch_coverage=1 00:09:22.342 --rc genhtml_function_coverage=1 00:09:22.342 --rc genhtml_legend=1 00:09:22.342 --rc geninfo_all_blocks=1 00:09:22.342 --rc geninfo_unexecuted_blocks=1 00:09:22.342 00:09:22.342 ' 00:09:22.342 08:09:27 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.342 08:09:27 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.342 08:09:27 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.342 08:09:27 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.342 08:09:27 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.342 08:09:27 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.342 08:09:27 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.342 08:09:27 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.342 08:09:27 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:22.342 08:09:27 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
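The trace below declares the ctrls/nvmes/bdfs arrays from test/common/nvme/functions.sh, rebinds the devices via setup.sh reset, and then scan_nvme_ctrls walks /sys/class/nvme/nvme*, mapping each controller to its PCI address and folding the "field : value" output of the bundled nvme-cli's id-ctrl into a bash associative array per controller (nvme0[vid]=0x1b36, nvme0[sn]='12341 ', and so on, one eval at a time). A minimal sketch of that parsing pattern with a generic array (the helper below is illustrative, not the functions.sh implementation):

    #!/usr/bin/env bash
    # Sketch: fold `nvme id-ctrl` output into an associative array.
    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip lines without "field : value"
        reg=${reg//[[:space:]]/}         # "vid       " -> "vid"
        ctrl[$reg]=${val# }              # keep the value, minus the ': ' pad
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]}"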
00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:22.342 08:09:27 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:22.342 08:09:27 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:22.342 08:09:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:22.342 08:09:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:22.342 08:09:27 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:22.342 08:09:27 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:22.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.861 Waiting for block devices as requested 00:09:22.861 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.861 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.121 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.121 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.459 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:28.459 08:09:33 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:28.459 08:09:33 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:28.459 08:09:33 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:28.459 08:09:33 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.459 08:09:33 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:28.459 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:28.460 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:28.461 08:09:33 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.461 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.462 08:09:33 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.462 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:28.463 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.464 08:09:33 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:28.464 08:09:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:28.464 08:09:33 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:28.464 08:09:33 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:28.465 08:09:33 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.465 08:09:33 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:28.465 08:09:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.465 
08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.465 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:28.466 08:09:33 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.466 08:09:33 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.466 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:28.467 08:09:33 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:28.467 08:09:33 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.467 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
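The pattern repeating above is the nvme_get helper from nvme/functions.sh: it runs nvme-cli's id-ctrl (or id-ns) against the device, splits each "reg : value" output line on ':' via IFS, and evals the pair into a global associative array (nvme1, nvme1n1, ...). A minimal sketch of that loop, assuming nvme-cli's "reg : value" line format; the real script's trimming and validation may differ:

    # Minimal sketch of the nvme_get pattern traced above; the actual
    # nvme/functions.sh implementation may trim and validate differently.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # global assoc array, e.g. nvme1
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip banner/blank lines
            reg=${reg//[[:space:]]/}         # "vid      " -> "vid"
            val=${val# }                     # drop the space after ':'
            eval "${ref}[${reg}]=\"\$val\""  # nvme1[vid]="0x1b36", ...
        done < <("$@")                       # run the nvme-cli command
    }
    # Usage, mirroring the trace:
    #   nvme_get nvme1 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
    #   echo "${nvme1[subnqn]}"   # nqn.2019-08.org.qemu:12340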
00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.468 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.469 
08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:28.469 08:09:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:28.470 08:09:33 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:28.470 08:09:33 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:28.470 08:09:33 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.470 08:09:33 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:28.470 08:09:33 nvme_scc -- 
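Between controllers the script does its bookkeeping: the namespace loop indexes nvme1n1 into nvme1_ns, then ctrls/nvmes/bdfs/ordered_ctrls record the controller, its namespace map, and its PCI address (0000:00:10.0) before the /sys/class/nvme/nvme* loop moves on to nvme2 at 0000:00:12.0. A sketch of that outer loop; pci_can_use is the gate scripts/common.sh provides per the trace, and deriving the BDF from the sysfs "device" symlink is an assumption:

    # Sketch of the enumeration loop visible in the trace; BDF derivation
    # via the sysfs "device" symlink is an assumption, pci_can_use is the
    # scripts/common.sh gate seen in the log.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                    # skip blocked devices
        ctrl_dev=${ctrl##*/}                              # nvme2
        nvme_get "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
        declare -gA "${ctrl_dev}_ns=()"
        unset -n _ctrl_ns; declare -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do               # nvme2n1, ...
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" nvme id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev                   # keyed by nsid
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # index-sorted list
    done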
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:28.470 08:09:33 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:28.470 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:28.471 08:09:33 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 nvme2[hmpre]=0 nvme2[hmmin]=0 nvme2[tnvmcap]=0 nvme2[unvmcap]=0 nvme2[rpmbs]=0
00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 nvme2[dsto]=0 nvme2[fwug]=0 nvme2[kas]=0 nvme2[hctma]=0 nvme2[mntmt]=0 nvme2[mxtmt]=0
00:09:28.471 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 nvme2[hmminds]=0 nvme2[hmmaxd]=0 nvme2[nsetidmax]=0 nvme2[endgidmax]=0 nvme2[anatt]=0
00:09:28.472 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 nvme2[anagrpmax]=0 nvme2[nanagrpid]=0 nvme2[pels]=0 nvme2[domainid]=0 nvme2[megcap]=0
00:09:28.472 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 nvme2[cqes]=0x44 nvme2[maxcmd]=0 nvme2[nn]=256
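
Two of the values above are packed fields rather than plain counts: sqes=0x66 and cqes=0x44 encode the submission- and completion-queue entry sizes as powers of two, with the required minimum in bits 3:0 and the maximum in bits 7:4 (standard Identify Controller semantics; the decode below is an illustrative sketch using the values from this run):

    # Decode the packed queue-entry-size fields captured above.
    # Low nibble = log2(minimum entry size), high nibble = log2(maximum).
    sqes=0x66 cqes=0x44
    printf 'SQ entry: %d..%d bytes\n' $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))
    printf 'CQ entry: %d..%d bytes\n' $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))
    # -> SQ entry: 64..64 bytes; CQ entry: 16..16 bytes
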
00:09:28.472 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d nvme2[fuses]=0 nvme2[fna]=0 nvme2[vwc]=0x7 nvme2[awun]=0 nvme2[awupf]=0
00:09:28.472 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 nvme2[nwpc]=0 nvme2[acwu]=0 nvme2[ocfs]=0x3 nvme2[sgls]=0x1 nvme2[mnan]=0
00:09:28.472 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 nvme2[maxcna]=0 nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:09:28.472 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 nvme2[iorcsz]=0 nvme2[icdoff]=0 nvme2[fcatt]=0 nvme2[msdbd]=0 nvme2[ofcs]=0
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
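
Every `nvme2n1[...]=...` line that follows is produced by the nvme_get helper traced above: it runs nvme-cli, splits each `field : value` output line on the colon, and evals the pair into a global associative array. A minimal sketch of that pattern, paraphrased for illustration (the real helper is nvme_get in nvme/functions.sh; the name below is hypothetical):

    # Sketch of the nvme_get pattern: one associative array per device,
    # keyed by the field names nvme-cli prints (nsze, flbas, lbaf0, ...).
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare -gA nvme2n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # field name with padding stripped
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[$reg]=\"${val# }\"" # e.g. nvme2n1[nsze]="0x100000"
        done < <(nvme "$@")                  # this run uses /usr/local/src/nvme-cli/nvme
    }
    # Usage, mirroring the trace: nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1
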
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 nvme2n1[ncap]=0x100000 nvme2n1[nuse]=0x100000
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 nvme2n1[nlbaf]=7 nvme2n1[flbas]=0x4 nvme2n1[mc]=0x3 nvme2n1[dpc]=0x1f nvme2n1[dps]=0
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 nvme2n1[rescap]=0 nvme2n1[fpi]=0 nvme2n1[dlfeat]=1
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 nvme2n1[nawupf]=0 nvme2n1[nacwu]=0 nvme2n1[nabsn]=0 nvme2n1[nabo]=0 nvme2n1[nabspf]=0
00:09:28.473 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 nvme2n1[nvmcap]=0 nvme2n1[npwg]=0 nvme2n1[npwa]=0 nvme2n1[npdg]=0 nvme2n1[npda]=0 nvme2n1[nows]=0
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 nvme2n1[mcl]=128 nvme2n1[msrc]=127 nvme2n1[nulbaf]=0
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 nvme2n1[nsattr]=0 nvme2n1[nvmsetid]=0 nvme2n1[endgid]=0
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 nvme2n1[eui64]=0000000000000000
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '  nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '  nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'  nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '  nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
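
The functions.sh@54-58 lines show how namespaces are discovered: the loop globs the controller's sysfs children and records each one in the _ctrl_ns array, keyed by namespace index. Roughly (an illustrative sketch using the paths from this run):

    # Enumerate a controller's namespaces the way the traced loop does:
    # glob /sys/class/nvme/nvme2/nvme2n* and key each hit by its index.
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns=()
    for ns in "$ctrl/${ctrl##*/}n"*; do      # nvme2n1 nvme2n2 nvme2n3 ...
        [[ -e $ns ]] || continue             # skip the literal glob when nothing matches
        ns_dev=${ns##*/}                     # e.g. nvme2n2
        _ctrl_ns[${ns_dev##*n}]=$ns_dev      # _ctrl_ns[2]=nvme2n2
    done
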
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:28.474 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 nvme2n2[ncap]=0x100000 nvme2n2[nuse]=0x100000
00:09:28.475 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 nvme2n2[nlbaf]=7 nvme2n2[flbas]=0x4 nvme2n2[mc]=0x3 nvme2n2[dpc]=0x1f nvme2n2[dps]=0
00:09:28.475 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 nvme2n2[rescap]=0 nvme2n2[fpi]=0 nvme2n2[dlfeat]=1
00:09:28.475 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 nvme2n2[nawupf]=0 nvme2n2[nacwu]=0 nvme2n2[nabsn]=0 nvme2n2[nabo]=0 nvme2n2[nabspf]=0
00:09:28.475 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 nvme2n2[nvmcap]=0 nvme2n2[npwg]=0 nvme2n2[npwa]=0 nvme2n2[npdg]=0 nvme2n2[npda]=0 nvme2n2[nows]=0
00:09:28.475 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 nvme2n2[mcl]=128 nvme2n2[msrc]=127 nvme2n2[nulbaf]=0
00:09:28.476 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 nvme2n2[nsattr]=0 nvme2n2[nvmsetid]=0 nvme2n2[endgid]=0
00:09:28.738 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 nvme2n2[eui64]=0000000000000000
00:09:28.738 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '  nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:28.738 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '  nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:28.738 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'  nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:28.739 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '  nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
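
Note how the in-use format is recorded: flbas=0x4 selects lbaf4, whose lbads:12 means 2^12-byte blocks. A quick check with the values captured for nvme2n2 (field semantics per the NVMe spec; variable names here are illustrative):

    # Derive the active block size from the captured fields: the low four
    # bits of flbas index the LBA format, and lbads is log2(block size).
    flbas=0x4
    lbaf='ms:0 lbads:12 rp:0 (in use)'          # nvme2n2[lbaf$((flbas & 0xf))]
    lbads=${lbaf#*lbads:}                       # "12 rp:0 (in use)"
    lbads=${lbads%% *}                          # "12"
    echo "LBA format $((flbas & 0xf)): $((1 << lbads))-byte blocks"
    # -> LBA format 4: 4096-byte blocks
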
00:09:28.739 08:09:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:28.739 08:09:33 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:28.739 08:09:33 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:28.739 08:09:33 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:28.739 08:09:33 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:28.739 08:09:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:28.739 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 nvme2n3[ncap]=0x100000 nvme2n3[nuse]=0x100000
00:09:28.739 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 nvme2n3[nlbaf]=7 nvme2n3[flbas]=0x4 nvme2n3[mc]=0x3 nvme2n3[dpc]=0x1f nvme2n3[dps]=0
00:09:28.739 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 nvme2n3[rescap]=0 nvme2n3[fpi]=0 nvme2n3[dlfeat]=1
00:09:28.739 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 nvme2n3[nawupf]=0 nvme2n3[nacwu]=0 nvme2n3[nabsn]=0 nvme2n3[nabo]=0 nvme2n3[nabspf]=0
00:09:28.739 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 nvme2n3[nvmcap]=0 nvme2n3[npwg]=0 nvme2n3[npwa]=0 nvme2n3[npdg]=0 nvme2n3[npda]=0
00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 nvme2n3[mssrl]=128 nvme2n3[mcl]=128 nvme2n3[msrc]=127 nvme2n3[nulbaf]=0
00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 nvme2n3[nsattr]=0 nvme2n3[nvmsetid]=0 nvme2n3[endgid]=0
00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 nvme2n3[eui64]=0000000000000000
00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '  nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '  nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'  nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:28.740 08:09:33 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:28.740 08:09:33 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:28.740 08:09:33 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.740 08:09:33 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:28.740 08:09:33 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:28.740 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:28.741 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 
08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.742 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.743 08:09:33 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:28.743 08:09:33 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:28.743 08:09:33 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:28.744 
08:09:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:28.744 08:09:33 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:28.744 08:09:33 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:28.744 08:09:33 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:28.744 08:09:33 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:29.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.880 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.880 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.880 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.880 0000:00:13.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:29.880 08:09:34 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:29.880 08:09:34 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:29.880 08:09:34 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.880 08:09:34 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:29.880 ************************************ 00:09:29.880 START TEST nvme_simple_copy 00:09:29.880 ************************************ 00:09:29.880 08:09:34 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:30.139 Initializing NVMe Controllers 00:09:30.139 Attaching to 0000:00:10.0 00:09:30.139 Controller supports SCC. Attached to 0000:00:10.0 00:09:30.139 Namespace ID: 1 size: 6GB 00:09:30.139 Initialization complete. 00:09:30.139 00:09:30.139 Controller QEMU NVMe Ctrl (12340 ) 00:09:30.139 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:30.139 Namespace Block Size:4096 00:09:30.139 Writing LBAs 0 to 63 with Random Data 00:09:30.139 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:30.139 LBAs matching Written Data: 64 00:09:30.139 00:09:30.139 real 0m0.330s 00:09:30.139 user 0m0.138s 00:09:30.139 sys 0m0.090s 00:09:30.139 08:09:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.139 ************************************ 00:09:30.139 08:09:35 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:30.139 END TEST nvme_simple_copy 00:09:30.139 ************************************ 00:09:30.398 00:09:30.398 real 0m8.248s 00:09:30.398 user 0m1.453s 00:09:30.398 sys 0m1.661s 00:09:30.398 08:09:35 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.398 08:09:35 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:30.398 ************************************ 00:09:30.398 END TEST nvme_scc 00:09:30.398 ************************************ 00:09:30.398 08:09:35 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:30.398 08:09:35 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:30.398 08:09:35 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:30.398 08:09:35 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:30.398 08:09:35 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:30.398 08:09:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.398 08:09:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.398 08:09:35 -- common/autotest_common.sh@10 -- # set +x 00:09:30.398 ************************************ 00:09:30.398 START TEST nvme_fdp 00:09:30.398 ************************************ 00:09:30.398 08:09:35 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:30.398 * Looking for test storage... 
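(Editorial aside on the selection logic that ran just above: get_ctrls_with_feature filtered the four controllers on ONCS bit 8, the Simple Copy capability bit that ctrl_has_scc tests. Every controller reported oncs=0x15d, which is binary 1 0101 1101, so bit 8 (0x100) is set and all four qualified; nvme1 at 0000:00:10.0 was simply the first one echoed back. The simple_copy binary then wrote LBAs 0-63 with random data, issued a Simple Copy to destination LBA 256, and verified that all 64 copied LBAs matched. A minimal sketch of the same ONCS gate, assuming an oncs value already parsed from nvme id-ctrl as in the log:

# Sketch: the ONCS bit-8 (Simple Copy) gate used by ctrl_has_scc above.
oncs=0x15d                    # value reported by each QEMU controller here
if (( oncs & 1 << 8 )); then  # 0x15d & 0x100 == 0x100 -> non-zero, so true
  echo "controller supports the Simple Copy command"
fi
)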
00:09:30.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:30.398 08:09:35 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:30.398 08:09:35 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:30.398 08:09:35 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:30.398 08:09:35 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.398 08:09:35 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.657 08:09:35 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:30.657 08:09:35 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:30.657 08:09:35 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.657 08:09:35 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:30.657 08:09:35 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.657 08:09:35 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:30.657 08:09:35 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:30.657 08:09:35 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.658 08:09:35 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:30.658 08:09:35 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.658 08:09:35 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.658 08:09:35 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.658 08:09:35 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:30.658 08:09:35 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.658 08:09:35 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:30.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.658 --rc genhtml_branch_coverage=1 00:09:30.658 --rc genhtml_function_coverage=1 00:09:30.658 --rc genhtml_legend=1 00:09:30.658 --rc geninfo_all_blocks=1 00:09:30.658 --rc geninfo_unexecuted_blocks=1 00:09:30.658 00:09:30.658 ' 00:09:30.658 08:09:35 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:30.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.658 --rc genhtml_branch_coverage=1 00:09:30.658 --rc genhtml_function_coverage=1 00:09:30.658 --rc genhtml_legend=1 00:09:30.658 --rc geninfo_all_blocks=1 00:09:30.658 --rc geninfo_unexecuted_blocks=1 00:09:30.658 00:09:30.658 ' 00:09:30.658 08:09:35 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:30.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.658 --rc genhtml_branch_coverage=1 00:09:30.658 --rc genhtml_function_coverage=1 00:09:30.658 --rc genhtml_legend=1 00:09:30.658 --rc geninfo_all_blocks=1 00:09:30.658 --rc geninfo_unexecuted_blocks=1 00:09:30.658 00:09:30.658 ' 00:09:30.658 08:09:35 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:30.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.658 --rc genhtml_branch_coverage=1 00:09:30.658 --rc genhtml_function_coverage=1 00:09:30.658 --rc genhtml_legend=1 00:09:30.658 --rc geninfo_all_blocks=1 00:09:30.658 --rc geninfo_unexecuted_blocks=1 00:09:30.658 00:09:30.658 ' 00:09:30.658 08:09:35 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.658 08:09:35 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.658 08:09:35 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.658 08:09:35 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.658 08:09:35 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.658 08:09:35 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.658 08:09:35 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.658 08:09:35 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.658 08:09:35 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:30.658 08:09:35 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
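(For context on the `lt 1.15 2` exchange a little further up: scripts/common.sh splits the two version strings on `.`, `-`, and `:` (the `IFS=.-:` lines) and compares them field by field; lcov 1.15 therefore sorts below 2, which is why the legacy `--rc lcov_*` option spellings are exported rather than the renamed lcov 2.x ones. A rough field-wise sketch of that comparison idea, as a simplification and not the SPDK implementation itself:

# Sketch: numeric, field-by-field version comparison ("is $1 < $2?").
version_lt() {
  local IFS=.-:                  # same separators the log's IFS=.-: lines use
  local -a v1=($1) v2=($2)
  local i
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1                       # versions are equal
}
version_lt 1.15 2 && echo "lcov 1.15 < 2: use legacy lcov_* rc option names"
)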
00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:30.658 08:09:35 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:30.658 08:09:35 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.658 08:09:35 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:30.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.176 Waiting for block devices as requested 00:09:31.176 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.176 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.435 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.435 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.714 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:36.714 08:09:41 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:36.714 08:09:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.714 08:09:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:36.714 08:09:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.714 08:09:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
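The nvme_get calls that follow are the heart of scan_nvme_ctrls: functions.sh@16 runs /usr/local/src/nvme-cli/nvme id-ctrl against the device, and the @21-@23 lines loop `IFS=: read -r reg val` over its "field : value" output, eval'ing each pair into the global associative array declared at @20 — hence nvme0[vid]=0x1b36, nvme0[sn]='12341 ', and so on below. The first `[[ -n '' ]]` skip in the trace is the header line of the id-ctrl output, which carries no value after the colon. A condensed sketch of the same pattern, simplified relative to the real functions.sh:

declare -gA ctrl=()
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}      # "ps    0" -> "ps0": drop name padding
    val=${val# }                  # keep the value, minus one leading space
    [[ -n $reg && -n $val ]] || continue   # skip headers/blank fields
    ctrl[$reg]=$val               # e.g. ctrl[mdts]=7
done < <(nvme id-ctrl /dev/nvme0)
echo "${ctrl[mn]}"                # -> "QEMU NVMe Ctrl " on this rig

Because val is the last variable passed to read, embedded colons in values such as the power-state lines ("mp:25.00W operational ...") survive intact, matching the nvme0[ps0] entry captured further down.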
00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:36.714 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:36.715 08:09:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:36.715 08:09:41 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
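A quick sanity check of one value captured above: mdts (recorded as 7 for nvme0 a few lines back) is a power-of-two multiplier of the controller's minimum memory page size (CAP.MPSMIN), with mdts=0 meaning "no limit". Assuming the usual 4 KiB minimum page for these QEMU controllers, the transfer cap works out as in this shell-arithmetic sketch (values taken from the trace):

mdts=7                                   # nvme0[mdts] from the dump above
mpsmin_bytes=$((1 << 12))                # 4 KiB page, i.e. CAP.MPSMIN = 0
echo "MDTS cap: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"   # -> 512 KiB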
00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:36.715 08:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.715 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:36.716 08:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.716 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:36.717 08:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.717 
08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:36.717 08:09:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.717 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:36.718 08:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.718 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:36.719 08:09:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.719 08:09:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:36.719 08:09:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.719 08:09:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.719 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 
08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.720 
08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.720 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:36.721 08:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.721 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:36.722 08:09:41 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.722 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:36.723 08:09:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:36.723 08:09:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.723 08:09:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:36.723 08:09:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.723 08:09:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:36.724 
08:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:36.724 08:09:41 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.724 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.725 08:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.725 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:36.726 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.990 08:09:41 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.990 08:09:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:36.991 08:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.991 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.992 08:09:41 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.992 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:36.993 08:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.993 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
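The wall of xtrace above is nvme/functions.sh's nvme_get caching every id-ns field of nvme2n3 into a bash associative array: nvme-cli prints one "field : value" line per register, read splits it on IFS=:, and an eval stores it under the register name. A minimal sketch of that parsing pattern, assuming nvme-cli's plain-text id-ns output and the /dev/nvme2n3 device from this run:

  declare -A ns
  while IFS=: read -r reg val; do            # split each "field : value" line
    reg=${reg//[[:space:]]/}                 # drop the padding around the key
    [[ -n $reg ]] && ns[$reg]=${val# }       # keep the value text as-is
  done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3)
  echo "nsze=${ns[nsze]} nlbaf=${ns[nlbaf]} flbas=${ns[flbas]}"

The harness layers an eval on top of this same loop only so it can pick the target array name (nvme2n2, nvme2n3, nvme3, ...) at run time.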
00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.994 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:36.995 
08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.995 08:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:36.995 08:09:41 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.995 08:09:41 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:36.995 08:09:41 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.995 08:09:41 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.995 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:36.996 08:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.996 
08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:36.996 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 
08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.997 08:09:41 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.997 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
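One detail worth pulling out of the namespace dumps above: nvme2n2 and nvme2n3 both report flbas=0x4 and mark lbaf4 (ms:0 lbads:12) as "(in use)", which is how the 4096-byte block size is encoded. A short sketch of that decode, using the captured strings and assuming the common case where the low nibble of FLBAS indexes the in-use LBA format:

  flbas=0x4
  lbaf4='ms:0 lbads:12 rp:0 (in use)'
  idx=$(( flbas & 0xf ))                     # in-use LBA format index -> 4
  lbads=${lbaf4#*lbads:}; lbads=${lbads%% *} # -> 12
  echo "lbaf$idx: $(( 1 << lbads ))-byte logical blocks"   # -> 4096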
00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:36.998 08:09:41 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:36.998 08:09:41 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
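The ctrl_has_fdp loop running here reduces to a single arithmetic test: CTRATT bit 19 (0x80000) advertises Flexible Data Placement support. A sketch against the two CTRATT values this run actually reports:

  has_fdp() { local ctratt=$1; (( ctratt & 1 << 19 )); }
  has_fdp 0x8000  || echo "no FDP"     # nvme0/nvme1/nvme2 -> no FDP
  has_fdp 0x88010 && echo "FDP"        # nvme3 -> FDP

Only nvme3 passes, which is why the selection below settles on ctrl=nvme3 at 0000:00:13.0.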
00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:36.999 08:09:41 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:36.999 08:09:41 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:36.999 08:09:41 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:36.999 08:09:41 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:37.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:38.135 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.135 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.394 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.394 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.394 08:09:43 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:38.394 08:09:43 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:38.394 08:09:43 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.394 08:09:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:38.394 ************************************ 00:09:38.394 START TEST nvme_flexible_data_placement 00:09:38.394 ************************************ 00:09:38.394 08:09:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:38.654 Initializing NVMe Controllers 00:09:38.654 Attaching to 0000:00:13.0 00:09:38.654 Controller supports FDP Attached to 0000:00:13.0 00:09:38.654 Namespace ID: 1 Endurance Group ID: 1 00:09:38.654 Initialization complete. 00:09:38.654 00:09:38.654 ================================== 00:09:38.654 == FDP tests for Namespace: #01 == 00:09:38.654 ================================== 00:09:38.654 00:09:38.654 Get Feature: FDP: 00:09:38.654 ================= 00:09:38.654 Enabled: Yes 00:09:38.654 FDP configuration Index: 0 00:09:38.654 00:09:38.654 FDP configurations log page 00:09:38.654 =========================== 00:09:38.654 Number of FDP configurations: 1 00:09:38.654 Version: 0 00:09:38.654 Size: 112 00:09:38.654 FDP Configuration Descriptor: 0 00:09:38.654 Descriptor Size: 96 00:09:38.654 Reclaim Group Identifier format: 2 00:09:38.654 FDP Volatile Write Cache: Not Present 00:09:38.654 FDP Configuration: Valid 00:09:38.654 Vendor Specific Size: 0 00:09:38.654 Number of Reclaim Groups: 2 00:09:38.654 Number of Reclaim Unit Handles: 8 00:09:38.654 Max Placement Identifiers: 128 00:09:38.654 Number of Namespaces Supported: 256 00:09:38.654 Reclaim unit Nominal Size: 6000000 bytes 00:09:38.654 Estimated Reclaim Unit Time Limit: Not Reported 00:09:38.654 RUH Desc #000: RUH Type: Initially Isolated 00:09:38.654 RUH Desc #001: RUH Type: Initially Isolated 00:09:38.654 RUH Desc #002: RUH Type: Initially Isolated 00:09:38.654 RUH Desc #003: RUH Type: Initially Isolated 00:09:38.654 RUH Desc #004: RUH Type: Initially Isolated 00:09:38.654 RUH Desc #005: RUH Type: Initially Isolated 00:09:38.654 RUH Desc #006: RUH Type: Initially Isolated 00:09:38.654 RUH Desc #007: RUH Type: Initially Isolated 00:09:38.654 00:09:38.654 FDP reclaim unit handle usage log page 00:09:38.654 ====================================== 00:09:38.654 Number of Reclaim Unit Handles: 8 00:09:38.654 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:38.654 RUH Usage Desc #001: RUH Attributes: Unused 00:09:38.654 RUH Usage Desc #002: RUH Attributes: Unused 00:09:38.654 RUH Usage Desc #003: RUH Attributes: Unused 00:09:38.654 RUH Usage Desc #004: RUH Attributes: Unused 00:09:38.654 RUH Usage Desc #005: RUH Attributes: Unused 00:09:38.654 RUH Usage Desc #006: RUH Attributes: Unused 00:09:38.654 RUH Usage Desc #007: RUH Attributes: Unused 00:09:38.654 00:09:38.654 FDP statistics log page 00:09:38.654 ======================= 00:09:38.654 Host bytes with metadata written: 827789312 00:09:38.654 Media bytes with metadata written: 827953152 00:09:38.654 Media bytes erased: 0 00:09:38.654 00:09:38.654 FDP Reclaim unit handle status 00:09:38.654 ============================== 00:09:38.654 Number of RUHS descriptors: 2 00:09:38.654 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004a8f 00:09:38.654 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:38.654 00:09:38.654 FDP write on placement id: 0 success 00:09:38.654 00:09:38.654 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:09:38.654 00:09:38.654 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:38.654 00:09:38.654 Get Feature: FDP Events for Placement handle: #0 00:09:38.654 ======================== 00:09:38.654 Number of FDP Events: 6 00:09:38.654 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:38.654 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:38.654 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:38.654 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:38.654 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:38.654 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:38.654 00:09:38.654 FDP events log page 00:09:38.654 =================== 00:09:38.654 Number of FDP events: 1 00:09:38.654 FDP Event #0: 00:09:38.654 Event Type: RU Not Written to Capacity 00:09:38.654 Placement Identifier: Valid 00:09:38.654 NSID: Valid 00:09:38.654 Location: Valid 00:09:38.654 Placement Identifier: 0 00:09:38.654 Event Timestamp: 8 00:09:38.654 Namespace Identifier: 1 00:09:38.654 Reclaim Group Identifier: 0 00:09:38.654 Reclaim Unit Handle Identifier: 0 00:09:38.654 00:09:38.654 FDP test passed 00:09:38.654 00:09:38.654 real 0m0.307s 00:09:38.654 user 0m0.106s 00:09:38.654 sys 0m0.098s 00:09:38.654 08:09:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.654 ************************************ 00:09:38.654 END TEST nvme_flexible_data_placement 00:09:38.654 08:09:43 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:38.654 ************************************ 00:09:38.654 00:09:38.654 real 0m8.400s 00:09:38.654 user 0m1.539s 00:09:38.654 sys 0m1.751s 00:09:38.654 08:09:43 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.654 08:09:43 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:38.654 ************************************ 00:09:38.654 END TEST nvme_fdp 00:09:38.654 ************************************ 00:09:38.914 08:09:43 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:38.914 08:09:43 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:38.914 08:09:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.914 08:09:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.914 08:09:43 -- common/autotest_common.sh@10 -- # set +x 00:09:38.914 ************************************ 00:09:38.914 START TEST nvme_rpc 00:09:38.914 ************************************ 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:38.914 * Looking for test storage... 
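Note on the FDP gate traced earlier: it comes down to a single bit. CTRATT bit 19 marks FDP support, which is why the ctratt=0x8000 controllers were skipped and only nvme3 (ctratt=0x88010) was selected. A minimal standalone sketch of the same check, assuming nvme-cli is installed and the controller is visible as /dev/nvme3 (device node hypothetical here, since in this run it is bound to uio_pci_generic):

# Read CTRATT from Identify Controller and test bit 19 (FDP support).
ctratt=$(nvme id-ctrl /dev/nvme3 | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
if (( ctratt & 1 << 19 )); then
  echo "FDP supported (ctratt=$ctratt)"
fi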
00:09:38.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.914 08:09:43 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.914 --rc genhtml_branch_coverage=1 00:09:38.914 --rc genhtml_function_coverage=1 00:09:38.914 --rc genhtml_legend=1 00:09:38.914 --rc geninfo_all_blocks=1 00:09:38.914 --rc geninfo_unexecuted_blocks=1 00:09:38.914 00:09:38.914 ' 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.914 --rc genhtml_branch_coverage=1 00:09:38.914 --rc genhtml_function_coverage=1 00:09:38.914 --rc genhtml_legend=1 00:09:38.914 --rc geninfo_all_blocks=1 00:09:38.914 --rc geninfo_unexecuted_blocks=1 00:09:38.914 00:09:38.914 ' 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:38.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.914 --rc genhtml_branch_coverage=1 00:09:38.914 --rc genhtml_function_coverage=1 00:09:38.914 --rc genhtml_legend=1 00:09:38.914 --rc geninfo_all_blocks=1 00:09:38.914 --rc geninfo_unexecuted_blocks=1 00:09:38.914 00:09:38.914 ' 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.914 --rc genhtml_branch_coverage=1 00:09:38.914 --rc genhtml_function_coverage=1 00:09:38.914 --rc genhtml_legend=1 00:09:38.914 --rc geninfo_all_blocks=1 00:09:38.914 --rc geninfo_unexecuted_blocks=1 00:09:38.914 00:09:38.914 ' 00:09:38.914 08:09:43 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.914 08:09:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:38.914 08:09:43 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:39.174 08:09:43 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:39.174 08:09:43 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:39.174 08:09:43 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:39.174 08:09:43 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:39.174 08:09:43 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66630 00:09:39.174 08:09:43 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:39.174 08:09:43 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:39.174 08:09:43 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66630 00:09:39.174 08:09:43 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 66630 ']' 00:09:39.174 08:09:43 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.174 08:09:43 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.174 08:09:43 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.174 08:09:43 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.174 08:09:43 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.174 [2024-11-17 08:09:44.102830] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:39.174 [2024-11-17 08:09:44.103009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66630 ] 00:09:39.433 [2024-11-17 08:09:44.292073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:39.433 [2024-11-17 08:09:44.418827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.433 [2024-11-17 08:09:44.418847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.369 08:09:45 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.369 08:09:45 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:40.369 08:09:45 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:40.369 Nvme0n1 00:09:40.627 08:09:45 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:40.627 08:09:45 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:40.886 request: 00:09:40.886 { 00:09:40.886 "bdev_name": "Nvme0n1", 00:09:40.886 "filename": "non_existing_file", 00:09:40.886 "method": "bdev_nvme_apply_firmware", 00:09:40.886 "req_id": 1 00:09:40.886 } 00:09:40.886 Got JSON-RPC error response 00:09:40.886 response: 00:09:40.886 { 00:09:40.886 "code": -32603, 00:09:40.886 "message": "open file failed." 00:09:40.886 } 00:09:40.886 08:09:45 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:40.886 08:09:45 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:40.886 08:09:45 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:40.886 08:09:45 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:40.886 08:09:45 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66630 00:09:40.886 08:09:45 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 66630 ']' 00:09:40.886 08:09:45 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 66630 00:09:40.886 08:09:45 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:40.886 08:09:45 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.886 08:09:45 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66630 00:09:41.144 killing process with pid 66630 00:09:41.144 08:09:45 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.144 08:09:45 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.144 08:09:45 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66630' 00:09:41.144 08:09:45 nvme_rpc -- common/autotest_common.sh@973 -- # kill 66630 00:09:41.144 08:09:45 nvme_rpc -- common/autotest_common.sh@978 -- # wait 66630 00:09:42.519 ************************************ 00:09:42.519 END TEST nvme_rpc 00:09:42.519 ************************************ 00:09:42.519 00:09:42.519 real 0m3.817s 00:09:42.519 user 0m7.297s 00:09:42.519 sys 0m0.612s 00:09:42.519 08:09:47 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.519 08:09:47 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.778 08:09:47 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:42.778 08:09:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:42.778 08:09:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.778 08:09:47 -- common/autotest_common.sh@10 -- # set +x 00:09:42.778 ************************************ 00:09:42.778 START TEST nvme_rpc_timeouts 00:09:42.778 ************************************ 00:09:42.778 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:42.778 * Looking for test storage... 00:09:42.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:42.778 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.778 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.778 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.778 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:42.778 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:42.779 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.779 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:42.779 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.779 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:42.779 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:42.779 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.779 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:42.779 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.779 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.779 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.779 08:09:47 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:42.779 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.779 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.779 --rc genhtml_branch_coverage=1 00:09:42.779 --rc genhtml_function_coverage=1 00:09:42.779 --rc genhtml_legend=1 00:09:42.779 --rc geninfo_all_blocks=1 00:09:42.779 --rc geninfo_unexecuted_blocks=1 00:09:42.779 00:09:42.779 ' 00:09:42.779 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.779 --rc genhtml_branch_coverage=1 00:09:42.779 --rc genhtml_function_coverage=1 00:09:42.779 --rc genhtml_legend=1 00:09:42.779 --rc geninfo_all_blocks=1 00:09:42.779 --rc geninfo_unexecuted_blocks=1 00:09:42.779 00:09:42.779 ' 00:09:42.779 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.779 --rc genhtml_branch_coverage=1 00:09:42.779 --rc genhtml_function_coverage=1 00:09:42.779 --rc genhtml_legend=1 00:09:42.779 --rc geninfo_all_blocks=1 00:09:42.779 --rc geninfo_unexecuted_blocks=1 00:09:42.779 00:09:42.779 ' 00:09:42.779 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.779 --rc genhtml_branch_coverage=1 00:09:42.779 --rc genhtml_function_coverage=1 00:09:42.779 --rc genhtml_legend=1 00:09:42.779 --rc geninfo_all_blocks=1 00:09:42.779 --rc geninfo_unexecuted_blocks=1 00:09:42.779 00:09:42.779 ' 00:09:42.779 08:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.779 08:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66700 00:09:42.779 08:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66700 00:09:42.779 08:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66732 00:09:42.779 08:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:42.779 08:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:42.779 08:09:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66732 00:09:42.779 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 66732 ']' 00:09:42.779 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.779 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.779 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.779 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.779 08:09:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:43.046 [2024-11-17 08:09:47.856576] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:43.046 [2024-11-17 08:09:47.856929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66732 ] 00:09:43.046 [2024-11-17 08:09:48.034109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:43.307 [2024-11-17 08:09:48.123123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.307 [2024-11-17 08:09:48.123144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.874 08:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.874 08:09:48 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:43.874 08:09:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:43.874 Checking default timeout settings: 00:09:43.874 08:09:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:44.441 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:44.441 Making settings changes with rpc: 00:09:44.441 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:44.698 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:44.698 Check default vs. 
modified settings: 00:09:44.698 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66700 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66700 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.956 Setting action_on_timeout is changed as expected. 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66700 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66700 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.956 Setting timeout_us is changed as expected. 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66700 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66700 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.956 Setting timeout_admin_us is changed as expected. 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66700 /tmp/settings_modified_66700 00:09:44.956 08:09:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66732 00:09:44.956 08:09:49 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 66732 ']' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 66732 00:09:44.956 08:09:49 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:44.956 08:09:49 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.956 08:09:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66732 00:09:45.215 killing process with pid 66732 00:09:45.215 08:09:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.215 08:09:49 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.215 08:09:49 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66732' 00:09:45.215 08:09:49 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 66732 00:09:45.215 08:09:49 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 66732 00:09:47.119 RPC TIMEOUT SETTING TEST PASSED. 00:09:47.119 08:09:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
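Stripped of harness plumbing, the check that just passed is a three-step diff: snapshot save_config, apply bdev_nvme_set_options, snapshot again, then compare each field with the same grep/awk/sed chain seen in the trace. A condensed sketch (rpc.py path and option values taken from the log; the /tmp filenames are illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default        # snapshot defaults
$rpc bdev_nvme_set_options --timeout-us=12000000 \
  --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified       # snapshot after changes
for setting in action_on_timeout timeout_us timeout_admin_us; do
  before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
done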
00:09:47.119 ************************************ 00:09:47.119 END TEST nvme_rpc_timeouts 00:09:47.119 ************************************ 00:09:47.119 00:09:47.119 real 0m4.053s 00:09:47.119 user 0m8.125s 00:09:47.120 sys 0m0.584s 00:09:47.120 08:09:51 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.120 08:09:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:47.120 08:09:51 -- spdk/autotest.sh@239 -- # uname -s 00:09:47.120 08:09:51 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:47.120 08:09:51 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:47.120 08:09:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:47.120 08:09:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.120 08:09:51 -- common/autotest_common.sh@10 -- # set +x 00:09:47.120 ************************************ 00:09:47.120 START TEST sw_hotplug 00:09:47.120 ************************************ 00:09:47.120 08:09:51 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:47.120 * Looking for test storage... 00:09:47.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:47.120 08:09:51 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.120 08:09:51 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.120 08:09:51 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.120 08:09:51 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.120 08:09:51 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:47.120 08:09:51 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.120 08:09:51 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.120 --rc genhtml_branch_coverage=1 00:09:47.120 --rc genhtml_function_coverage=1 00:09:47.120 --rc genhtml_legend=1 00:09:47.120 --rc geninfo_all_blocks=1 00:09:47.120 --rc geninfo_unexecuted_blocks=1 00:09:47.120 00:09:47.120 ' 00:09:47.120 08:09:51 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.120 --rc genhtml_branch_coverage=1 00:09:47.120 --rc genhtml_function_coverage=1 00:09:47.120 --rc genhtml_legend=1 00:09:47.120 --rc geninfo_all_blocks=1 00:09:47.120 --rc geninfo_unexecuted_blocks=1 00:09:47.120 00:09:47.120 ' 00:09:47.120 08:09:51 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.120 --rc genhtml_branch_coverage=1 00:09:47.120 --rc genhtml_function_coverage=1 00:09:47.120 --rc genhtml_legend=1 00:09:47.120 --rc geninfo_all_blocks=1 00:09:47.120 --rc geninfo_unexecuted_blocks=1 00:09:47.120 00:09:47.120 ' 00:09:47.120 08:09:51 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.120 --rc genhtml_branch_coverage=1 00:09:47.120 --rc genhtml_function_coverage=1 00:09:47.120 --rc genhtml_legend=1 00:09:47.120 --rc geninfo_all_blocks=1 00:09:47.120 --rc geninfo_unexecuted_blocks=1 00:09:47.120 00:09:47.120 ' 00:09:47.120 08:09:51 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:47.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.380 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:47.380 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:47.380 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:47.380 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:47.639 08:09:52 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:47.639 08:09:52 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:47.639 08:09:52 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
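The nvme_in_userspace expansion traced next is, at heart, one lspci pipeline: match PCI class 01 (mass storage), subclass 08 (NVM), programming interface 02 (NVMe), print the BDF, and strip lspci's quoting. Lifted out of the trace below as a standalone one-liner:

# List NVMe controller BDFs by class code 0108 + prog-if 02.
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'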
00:09:47.639 08:09:52 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:47.639 08:09:52 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:47.640 08:09:52 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:47.640 08:09:52 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:47.640 08:09:52 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:47.640 08:09:52 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:47.640 08:09:52 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:47.900 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:48.159 Waiting for block devices as requested 00:09:48.159 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.159 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.418 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.418 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.691 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:53.692 08:09:58 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:53.692 08:09:58 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:53.951 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:53.951 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.951 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:54.518 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:54.518 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.518 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:54.778 08:09:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67602 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:54.778 08:09:59 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:54.778 08:09:59 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:54.778 08:09:59 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:54.778 08:09:59 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:54.778 08:09:59 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:54.778 08:09:59 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:55.038 Initializing NVMe Controllers 00:09:55.038 Attaching to 0000:00:10.0 00:09:55.038 Attaching to 0000:00:11.0 00:09:55.038 Attached to 0000:00:10.0 00:09:55.038 Attached to 0000:00:11.0 00:09:55.038 Initialization complete. Starting I/O... 
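The remove/attach cycles below are software hotplug: the bare 'echo 1' traces are sysfs writes whose redirection targets xtrace does not show. A hedged sketch of one cycle, with the sysfs paths assumed from the standard kernel PCI interface rather than read out of this log:

bdf=0000:00:10.0
echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # detach; the driver logs the ctrlr entering failed state
sleep 6                                       # hotplug_wait, per the config above
echo 1 > /sys/bus/pci/rescan                  # re-enumerate; the device re-attaches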
00:09:55.038 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:55.038 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:55.038 00:09:55.974 QEMU NVMe Ctrl (12340 ): 1086 I/Os completed (+1086) 00:09:55.974 QEMU NVMe Ctrl (12341 ): 1157 I/Os completed (+1157) 00:09:55.974 00:09:57.351 QEMU NVMe Ctrl (12340 ): 2574 I/Os completed (+1488) 00:09:57.351 QEMU NVMe Ctrl (12341 ): 2689 I/Os completed (+1532) 00:09:57.351 00:09:58.289 QEMU NVMe Ctrl (12340 ): 4413 I/Os completed (+1839) 00:09:58.289 QEMU NVMe Ctrl (12341 ): 4576 I/Os completed (+1887) 00:09:58.289 00:09:59.226 QEMU NVMe Ctrl (12340 ): 6233 I/Os completed (+1820) 00:09:59.226 QEMU NVMe Ctrl (12341 ): 6437 I/Os completed (+1861) 00:09:59.226 00:10:00.211 QEMU NVMe Ctrl (12340 ): 8193 I/Os completed (+1960) 00:10:00.211 QEMU NVMe Ctrl (12341 ): 8417 I/Os completed (+1980) 00:10:00.212 00:10:00.780 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:00.780 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.780 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.780 [2024-11-17 08:10:05.716105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:00.780 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:00.780 [2024-11-17 08:10:05.717908] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 [2024-11-17 08:10:05.718149] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 [2024-11-17 08:10:05.718186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 [2024-11-17 08:10:05.718213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:00.780 [2024-11-17 08:10:05.721128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 [2024-11-17 08:10:05.721219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 [2024-11-17 08:10:05.721241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 [2024-11-17 08:10:05.721260] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:00.780 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:00.780 [2024-11-17 08:10:05.747442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:00.780 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:00.780 [2024-11-17 08:10:05.749162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 [2024-11-17 08:10:05.749417] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 [2024-11-17 08:10:05.749493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 [2024-11-17 08:10:05.749516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:00.780 [2024-11-17 08:10:05.751955] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 [2024-11-17 08:10:05.752011] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 [2024-11-17 08:10:05.752033] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 [2024-11-17 08:10:05.752051] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:00.780 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:00.780 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:01.039 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:01.039 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:01.039 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:01.039 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:01.039 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:01.039 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:01.039 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:01.039 08:10:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:01.039 Attaching to 0000:00:10.0 00:10:01.039 Attached to 0000:00:10.0 00:10:01.039 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:01.039 00:10:01.039 08:10:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:01.039 08:10:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:01.039 08:10:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:01.039 Attaching to 0000:00:11.0 00:10:01.298 Attached to 0000:00:11.0 00:10:02.236 QEMU NVMe Ctrl (12340 ): 1882 I/Os completed (+1882) 00:10:02.236 QEMU NVMe Ctrl (12341 ): 1728 I/Os completed (+1728) 00:10:02.236 00:10:03.174 QEMU NVMe Ctrl (12340 ): 3753 I/Os completed (+1871) 00:10:03.174 QEMU NVMe Ctrl (12341 ): 3656 I/Os completed (+1928) 00:10:03.174 00:10:04.112 QEMU NVMe Ctrl (12340 ): 5841 I/Os completed (+2088) 00:10:04.112 QEMU NVMe Ctrl (12341 ): 5786 I/Os completed (+2130) 00:10:04.112 00:10:05.060 QEMU NVMe Ctrl (12340 ): 7803 I/Os completed (+1962) 00:10:05.060 QEMU NVMe Ctrl (12341 ): 7778 I/Os completed (+1992) 00:10:05.061 00:10:05.998 QEMU NVMe Ctrl (12340 ): 9827 I/Os completed (+2024) 00:10:05.998 QEMU NVMe Ctrl (12341 ): 9838 I/Os completed (+2060) 00:10:05.998 00:10:07.378 QEMU NVMe Ctrl (12340 ): 11835 I/Os completed (+2008) 00:10:07.378 QEMU NVMe Ctrl (12341 ): 11870 I/Os completed (+2032) 00:10:07.378 00:10:08.315 QEMU NVMe Ctrl (12340 ): 13803 I/Os completed (+1968) 00:10:08.315 QEMU NVMe Ctrl (12341 ): 13863 I/Os completed (+1993) 00:10:08.315 00:10:09.253 QEMU NVMe Ctrl (12340 ): 15739 I/Os completed (+1936) 00:10:09.253 QEMU NVMe 
Ctrl (12341 ): 15827 I/Os completed (+1964) 00:10:09.253 00:10:10.190 QEMU NVMe Ctrl (12340 ): 17699 I/Os completed (+1960) 00:10:10.190 QEMU NVMe Ctrl (12341 ): 17816 I/Os completed (+1989) 00:10:10.190 00:10:11.127 QEMU NVMe Ctrl (12340 ): 19648 I/Os completed (+1949) 00:10:11.127 QEMU NVMe Ctrl (12341 ): 19802 I/Os completed (+1986) 00:10:11.127 00:10:12.065 QEMU NVMe Ctrl (12340 ): 21712 I/Os completed (+2064) 00:10:12.065 QEMU NVMe Ctrl (12341 ): 21873 I/Os completed (+2071) 00:10:12.065 00:10:13.003 QEMU NVMe Ctrl (12340 ): 23784 I/Os completed (+2072) 00:10:13.003 QEMU NVMe Ctrl (12341 ): 23951 I/Os completed (+2078) 00:10:13.003 00:10:13.262 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:13.262 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:13.262 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:13.262 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:13.262 [2024-11-17 08:10:18.056129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:13.262 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:13.262 [2024-11-17 08:10:18.058046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 [2024-11-17 08:10:18.058295] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 [2024-11-17 08:10:18.058366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 [2024-11-17 08:10:18.058516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:13.262 [2024-11-17 08:10:18.062048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 [2024-11-17 08:10:18.062256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 [2024-11-17 08:10:18.062425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 [2024-11-17 08:10:18.062598] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:13.262 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:13.262 [2024-11-17 08:10:18.084418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:13.262 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:13.262 [2024-11-17 08:10:18.086854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 [2024-11-17 08:10:18.086999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 [2024-11-17 08:10:18.087069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 [2024-11-17 08:10:18.087246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:13.262 [2024-11-17 08:10:18.090600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 [2024-11-17 08:10:18.090830] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 [2024-11-17 08:10:18.091001] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.262 [2024-11-17 08:10:18.091178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.263 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:13.263 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:13.263 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:13.263 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:13.263 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:13.263 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:13.522 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.522 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:13.522 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:13.522 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:13.522 Attaching to 0000:00:10.0 00:10:13.522 Attached to 0000:00:10.0 00:10:13.522 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:13.522 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.522 08:10:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:13.522 Attaching to 0000:00:11.0 00:10:13.522 Attached to 0000:00:11.0 00:10:14.090 QEMU NVMe Ctrl (12340 ): 1290 I/Os completed (+1290) 00:10:14.090 QEMU NVMe Ctrl (12341 ): 1140 I/Os completed (+1140) 00:10:14.090 00:10:15.034 QEMU NVMe Ctrl (12340 ): 3239 I/Os completed (+1949) 00:10:15.034 QEMU NVMe Ctrl (12341 ): 3109 I/Os completed (+1969) 00:10:15.034 00:10:15.970 QEMU NVMe Ctrl (12340 ): 5187 I/Os completed (+1948) 00:10:15.971 QEMU NVMe Ctrl (12341 ): 5121 I/Os completed (+2012) 00:10:15.971 00:10:17.349 QEMU NVMe Ctrl (12340 ): 7151 I/Os completed (+1964) 00:10:17.349 QEMU NVMe Ctrl (12341 ): 7121 I/Os completed (+2000) 00:10:17.349 00:10:18.287 QEMU NVMe Ctrl (12340 ): 9215 I/Os completed (+2064) 00:10:18.287 QEMU NVMe Ctrl (12341 ): 9219 I/Os completed (+2098) 00:10:18.287 00:10:19.225 QEMU NVMe Ctrl (12340 ): 11179 I/Os completed (+1964) 00:10:19.225 QEMU NVMe Ctrl (12341 ): 11240 I/Os completed (+2021) 00:10:19.225 00:10:20.162 QEMU NVMe Ctrl (12340 ): 13227 I/Os completed (+2048) 00:10:20.162 QEMU NVMe Ctrl (12341 ): 13315 I/Os completed (+2075) 00:10:20.162 00:10:21.100 QEMU NVMe Ctrl (12340 ): 15183 I/Os completed (+1956) 00:10:21.100 QEMU NVMe Ctrl (12341 ): 15308 I/Os completed (+1993) 00:10:21.100 00:10:22.038 
QEMU NVMe Ctrl (12340 ): 17247 I/Os completed (+2064) 00:10:22.038 QEMU NVMe Ctrl (12341 ): 17441 I/Os completed (+2133) 00:10:22.038 00:10:22.976 QEMU NVMe Ctrl (12340 ): 19327 I/Os completed (+2080) 00:10:22.976 QEMU NVMe Ctrl (12341 ): 19563 I/Os completed (+2122) 00:10:22.976 00:10:24.356 QEMU NVMe Ctrl (12340 ): 21391 I/Os completed (+2064) 00:10:24.356 QEMU NVMe Ctrl (12341 ): 21653 I/Os completed (+2090) 00:10:24.356 00:10:25.296 QEMU NVMe Ctrl (12340 ): 23395 I/Os completed (+2004) 00:10:25.296 QEMU NVMe Ctrl (12341 ): 23673 I/Os completed (+2020) 00:10:25.296 00:10:25.556 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:25.556 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:25.556 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.556 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.556 [2024-11-17 08:10:30.381975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:25.556 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:25.556 [2024-11-17 08:10:30.384012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 [2024-11-17 08:10:30.384071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 [2024-11-17 08:10:30.384122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 [2024-11-17 08:10:30.384145] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:25.556 [2024-11-17 08:10:30.387263] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 [2024-11-17 08:10:30.387345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 [2024-11-17 08:10:30.387372] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 [2024-11-17 08:10:30.387392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.556 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.556 [2024-11-17 08:10:30.410144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
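Note on the removal traces above: each hotplug event begins with sw_hotplug.sh@40 writing 1 to the device's sysfs remove node, which yanks the controller out from under the driver and produces the "Controller removed" / "aborting outstanding command" records that follow. A minimal sketch of that step, assuming the standard Linux sysfs PCI interface (the function name and the $bdf variable are illustrative, not taken from the script):

    # Surprise-remove a PCI NVMe device; the driver aborts any pending I/O.
    remove_pci_dev() {
        local bdf=$1                                   # e.g. 0000:00:10.0
        echo 1 > "/sys/bus/pci/devices/${bdf}/remove"
    }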
00:10:25.556 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:25.556 [2024-11-17 08:10:30.412078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 [2024-11-17 08:10:30.412316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 [2024-11-17 08:10:30.412520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 [2024-11-17 08:10:30.412552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:25.556 [2024-11-17 08:10:30.415033] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 [2024-11-17 08:10:30.415071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 [2024-11-17 08:10:30.415140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 [2024-11-17 08:10:30.415159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.556 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:25.556 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:25.556 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:25.556 EAL: Scan for (pci) bus failed. 00:10:25.556 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.556 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.556 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:25.815 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:25.815 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.815 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.815 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.815 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:25.815 Attaching to 0000:00:10.0 00:10:25.815 Attached to 0000:00:10.0 00:10:25.816 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:25.816 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.816 08:10:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:25.816 Attaching to 0000:00:11.0 00:10:25.816 Attached to 0000:00:11.0 00:10:25.816 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:25.816 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:25.816 [2024-11-17 08:10:30.710875] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:38.021 08:10:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:38.021 08:10:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:38.021 08:10:42 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.99 00:10:38.021 08:10:42 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.99 00:10:38.021 08:10:42 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:38.021 08:10:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.99 00:10:38.021 08:10:42 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.99 2 00:10:38.021 remove_attach_helper took 42.99s to complete (handling 2 nvme drive(s)) 08:10:42 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:44.642 08:10:48 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67602 00:10:44.642 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67602) - No such process 00:10:44.642 08:10:48 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67602 00:10:44.642 08:10:48 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:44.642 08:10:48 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:44.642 08:10:48 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:44.642 08:10:48 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68142 00:10:44.642 08:10:48 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:44.642 08:10:48 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:44.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.642 08:10:48 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68142 00:10:44.642 08:10:48 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68142 ']' 00:10:44.642 08:10:48 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.642 08:10:48 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.642 08:10:48 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.642 08:10:48 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.642 08:10:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.642 [2024-11-17 08:10:48.854740] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
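The kill -0 67602 / wait 67602 pair above is how the harness confirms the previous test app is really gone before tgt_run_hotplug launches spdk_tgt and waits for its RPC socket. A rough sketch of that readiness wait, assuming the default /var/tmp/spdk.sock socket path (this loop is a simplification of waitforlisten in autotest_common.sh, not its actual body):

    # Start the target and block until its UNIX-domain RPC socket appears.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do
        kill -0 "$spdk_tgt_pid" || { echo 'spdk_tgt exited early' >&2; exit 1; }
        sleep 0.1
    done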
00:10:44.642 [2024-11-17 08:10:48.855657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68142 ] 00:10:44.642 [2024-11-17 08:10:49.051163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.642 [2024-11-17 08:10:49.175990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.901 08:10:49 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.902 08:10:49 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:44.902 08:10:49 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:44.902 08:10:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.902 08:10:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:44.902 08:10:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.902 08:10:49 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:44.902 08:10:49 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:44.902 08:10:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:44.902 08:10:49 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:44.902 08:10:49 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:44.902 08:10:49 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:44.902 08:10:49 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:44.902 08:10:49 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:44.902 08:10:49 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:44.902 08:10:49 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:44.902 08:10:49 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:44.902 08:10:49 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:44.902 08:10:49 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.472 08:10:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.472 08:10:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.472 08:10:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.472 [2024-11-17 08:10:55.914525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:51.472 [2024-11-17 08:10:55.917141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.472 [2024-11-17 08:10:55.917193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.472 [2024-11-17 08:10:55.917216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.472 [2024-11-17 08:10:55.917243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.472 [2024-11-17 08:10:55.917257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.472 [2024-11-17 08:10:55.917271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.472 [2024-11-17 08:10:55.917284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.472 [2024-11-17 08:10:55.917298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.472 [2024-11-17 08:10:55.917309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.472 [2024-11-17 08:10:55.917327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.472 [2024-11-17 08:10:55.917340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.472 [2024-11-17 08:10:55.917353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:51.472 08:10:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:51.472 [2024-11-17 08:10:56.314492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
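The bdev_bdfs traces at sw_hotplug.sh@12/@13 show how the test maps SPDK bdevs back to PCI addresses: bdev_get_bdevs is issued over JSON-RPC and jq pulls each NVMe bdev's pci_address out of driver_specific. Reconstructed from those xtrace lines (the /dev/fd/63 in the log is bash process substitution):

    # List the unique PCI BDFs currently backing NVMe bdevs.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }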
00:10:51.472 [2024-11-17 08:10:56.317053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.472 [2024-11-17 08:10:56.317140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.472 [2024-11-17 08:10:56.317163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.472 [2024-11-17 08:10:56.317183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.472 [2024-11-17 08:10:56.317199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.472 [2024-11-17 08:10:56.317212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.472 [2024-11-17 08:10:56.317242] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.472 [2024-11-17 08:10:56.317255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.472 [2024-11-17 08:10:56.317268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.472 [2024-11-17 08:10:56.317281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.472 [2024-11-17 08:10:56.317295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.472 [2024-11-17 08:10:56.317307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.472 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:51.472 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.472 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.472 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.472 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.472 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.472 08:10:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.472 08:10:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.472 08:10:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.731 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:51.731 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:51.731 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.731 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.731 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:51.731 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:51.731 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.732 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.732 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.732 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:51.990 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:51.990 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.990 08:10:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.199 08:11:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.199 08:11:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.199 08:11:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.199 08:11:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.199 08:11:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.199 08:11:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.199 [2024-11-17 08:11:08.914708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:04.199 [2024-11-17 08:11:08.917684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.199 [2024-11-17 08:11:08.917774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.199 [2024-11-17 08:11:08.917801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.199 [2024-11-17 08:11:08.917864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.199 [2024-11-17 08:11:08.917886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.199 [2024-11-17 08:11:08.917902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.199 [2024-11-17 08:11:08.917917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.199 [2024-11-17 08:11:08.917932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.199 [2024-11-17 08:11:08.917945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.199 [2024-11-17 08:11:08.917960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.199 [2024-11-17 08:11:08.917972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.199 [2024-11-17 08:11:08.917986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:04.199 08:11:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:04.458 [2024-11-17 08:11:09.314609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
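The (( 2 > 0 )) / sleep 0.5 / "Still waiting for %s to be gone" sequence is a poll loop: after the sysfs remove, the test keeps re-reading bdev_bdfs until the hotplug monitor has torn both bdevs down. In sketch form, following the @50/@51 traces:

    # Block until no bdev still reports a PCI address, i.e. removal completed.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done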
00:11:04.458 [2024-11-17 08:11:09.316978] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.458 [2024-11-17 08:11:09.317022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.458 [2024-11-17 08:11:09.317045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.458 [2024-11-17 08:11:09.317062] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.458 [2024-11-17 08:11:09.317093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.458 [2024-11-17 08:11:09.317125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.458 [2024-11-17 08:11:09.317142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.458 [2024-11-17 08:11:09.317155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.458 [2024-11-17 08:11:09.317168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.458 [2024-11-17 08:11:09.317181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.458 [2024-11-17 08:11:09.317209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.458 [2024-11-17 08:11:09.317221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.458 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:04.458 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.458 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.458 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.458 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.458 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.458 08:11:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.458 08:11:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.458 08:11:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.716 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:04.716 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:04.716 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.716 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.716 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:04.716 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:04.716 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.716 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.716 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.716 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:04.975 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:04.975 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.975 08:11:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.188 08:11:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.188 08:11:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.188 08:11:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.188 08:11:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.188 08:11:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.188 [2024-11-17 08:11:21.914782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:17.188 [2024-11-17 08:11:21.917526] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.188 [2024-11-17 08:11:21.917705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.188 [2024-11-17 08:11:21.917877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.188 [2024-11-17 08:11:21.918254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.188 [2024-11-17 08:11:21.918380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.188 [2024-11-17 08:11:21.918564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.188 [2024-11-17 08:11:21.918752] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.188 [2024-11-17 08:11:21.918955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.188 [2024-11-17 08:11:21.919140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.188 [2024-11-17 08:11:21.919409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.188 [2024-11-17 08:11:21.919587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.188 [2024-11-17 08:11:21.919776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.188 08:11:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:17.188 08:11:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:17.448 [2024-11-17 08:11:22.414737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
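The re-attach half of each event is the @56..@62 echo sequence: a PCI bus rescan brings the removed functions back, and each BDF is steered onto uio_pci_generic. The traces only show the values being echoed, so the sysfs targets below are the conventional driver_override flow and are an assumption about what the script writes to:

    echo 1 > /sys/bus/pci/rescan                                     # re-enumerate removed devices
    echo uio_pci_generic > "/sys/bus/pci/devices/${bdf}/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                         # bind per the override
    echo '' > "/sys/bus/pci/devices/${bdf}/driver_override"          # clear the override again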
00:11:17.448 [2024-11-17 08:11:22.417068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.448 [2024-11-17 08:11:22.417279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.448 [2024-11-17 08:11:22.417433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.448 [2024-11-17 08:11:22.417661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.448 [2024-11-17 08:11:22.417790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.448 [2024-11-17 08:11:22.417941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.448 [2024-11-17 08:11:22.418120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.448 [2024-11-17 08:11:22.418334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.448 [2024-11-17 08:11:22.418516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.448 [2024-11-17 08:11:22.418731] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.448 [2024-11-17 08:11:22.418940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.448 [2024-11-17 08:11:22.419141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.448 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:17.448 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.448 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.448 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.448 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.448 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.448 08:11:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.448 08:11:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.707 08:11:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.707 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:17.707 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:17.707 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.707 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.707 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:17.707 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:17.707 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.707 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.707 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.707 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:17.966 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:17.966 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.966 08:11:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.04 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.04 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.04 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.04 2 00:11:30.178 remove_attach_helper took 45.04s to complete (handling 2 nvme drive(s)) 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:30.178 08:11:34 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:30.178 08:11:34 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:30.178 08:11:34 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.745 08:11:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.745 08:11:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.745 08:11:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.745 [2024-11-17 08:11:40.984997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:36.745 [2024-11-17 08:11:40.987268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.745 [2024-11-17 08:11:40.987312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.745 [2024-11-17 08:11:40.987371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.745 [2024-11-17 08:11:40.987400] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.745 [2024-11-17 08:11:40.987415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.745 [2024-11-17 08:11:40.987429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.745 [2024-11-17 08:11:40.987444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.745 [2024-11-17 08:11:40.987458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.745 [2024-11-17 08:11:40.987470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.745 [2024-11-17 08:11:40.987499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.745 [2024-11-17 08:11:40.987511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.745 [2024-11-17 08:11:40.987527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:36.745 08:11:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.745 [2024-11-17 08:11:41.384978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
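The time=45.04 / TIMEFORMAT=%2R traces earlier come from timing_cmd, which measures how long remove_attach_helper ran. One way to capture the output of bash's time keyword into a variable, as a sketch rather than the helper's exact body (note the timed command's own stderr would be captured too):

    TIMEFORMAT=%2R                        # elapsed real time, two decimal places
    elapsed=$( { time remove_attach_helper 3 6 true >/dev/null; } 2>&1 )
    printf 'remove_attach_helper took %ss\n' "$elapsed"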
00:11:36.745 [2024-11-17 08:11:41.388822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.745 [2024-11-17 08:11:41.388865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.745 [2024-11-17 08:11:41.388885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.745 [2024-11-17 08:11:41.388902] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.745 [2024-11-17 08:11:41.388917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.745 [2024-11-17 08:11:41.388928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.745 [2024-11-17 08:11:41.388943] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.745 [2024-11-17 08:11:41.388954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.745 [2024-11-17 08:11:41.388967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.745 [2024-11-17 08:11:41.388979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.745 [2024-11-17 08:11:41.388992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.745 [2024-11-17 08:11:41.389002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.745 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:36.745 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.745 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.745 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.745 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.745 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.745 08:11:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.745 08:11:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.746 08:11:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.746 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:36.746 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.746 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.746 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.746 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.746 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:37.005 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.005 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.005 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.005 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:37.005 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:37.005 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.005 08:11:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.213 08:11:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.213 08:11:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.213 08:11:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.213 08:11:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.213 08:11:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.213 08:11:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.213 [2024-11-17 08:11:53.985148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:49.213 [2024-11-17 08:11:53.987856] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.213 [2024-11-17 08:11:53.988069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.213 [2024-11-17 08:11:53.988373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.213 [2024-11-17 08:11:53.988654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.213 [2024-11-17 08:11:53.988769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.213 [2024-11-17 08:11:53.988898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.213 [2024-11-17 08:11:53.989142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.213 [2024-11-17 08:11:53.989275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.213 [2024-11-17 08:11:53.989417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.213 [2024-11-17 08:11:53.989612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.213 [2024-11-17 08:11:53.989665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.213 [2024-11-17 08:11:53.989844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:49.213 08:11:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:49.782 [2024-11-17 08:11:54.485093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
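The bdev_nvme_set_hotplug -d / -e pair at sw_hotplug.sh@119/@120 cycles SPDK's nvme hotplug monitor off and back on before the bdev-based rounds. From outside the harness, the same RPCs can be issued with the stock rpc.py wrapper:

    # Toggle the bdev_nvme hotplug monitor over JSON-RPC.
    scripts/rpc.py bdev_nvme_set_hotplug -d      # stop watching for insert/remove events
    scripts/rpc.py bdev_nvme_set_hotplug -e      # resume; re-enumerated ctrlrs get re-attached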
00:11:49.782 [2024-11-17 08:11:54.486942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.782 [2024-11-17 08:11:54.487132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.782 [2024-11-17 08:11:54.487281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.782 [2024-11-17 08:11:54.487610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.782 [2024-11-17 08:11:54.487717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.782 [2024-11-17 08:11:54.487942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.782 [2024-11-17 08:11:54.488008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.782 [2024-11-17 08:11:54.488049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.782 [2024-11-17 08:11:54.488282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.782 [2024-11-17 08:11:54.488350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.782 [2024-11-17 08:11:54.488471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.782 [2024-11-17 08:11:54.488572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.782 08:11:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.782 08:11:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.782 08:11:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.782 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:50.041 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:50.041 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.041 08:11:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.378 08:12:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.378 08:12:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.378 08:12:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.378 08:12:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.378 08:12:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.378 08:12:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.378 [2024-11-17 08:12:06.985243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
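The @71 test [[ 0000:00:10.0 0000:00:11.0 == ... ]] is the success check for each round: after rescan and re-attach, the sorted BDF list from bdev_bdfs must match the original pair exactly. An equivalent standalone form of that check:

    # Verify both controllers came back after re-attach.
    bdfs=($(bdev_bdfs))
    [[ "${bdfs[*]}" == '0000:00:10.0 0000:00:11.0' ]] || exit 1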
00:12:02.378 [2024-11-17 08:12:06.987747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.378 [2024-11-17 08:12:06.987812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.378 [2024-11-17 08:12:06.987833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.378 [2024-11-17 08:12:06.987859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.378 [2024-11-17 08:12:06.987873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.378 [2024-11-17 08:12:06.987887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.378 [2024-11-17 08:12:06.987900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.378 [2024-11-17 08:12:06.987916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.378 [2024-11-17 08:12:06.987928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.378 [2024-11-17 08:12:06.987942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.378 [2024-11-17 08:12:06.987953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.378 [2024-11-17 08:12:06.987966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:02.378 08:12:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:02.378 [2024-11-17 08:12:07.385246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:02.637 [2024-11-17 08:12:07.389669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.637 [2024-11-17 08:12:07.389711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.637 [2024-11-17 08:12:07.389747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.637 [2024-11-17 08:12:07.389766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.637 [2024-11-17 08:12:07.389782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.637 [2024-11-17 08:12:07.389794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.637 [2024-11-17 08:12:07.389808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.637 [2024-11-17 08:12:07.389820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.637 [2024-11-17 08:12:07.389833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.637 [2024-11-17 08:12:07.389845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.637 [2024-11-17 08:12:07.389860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.637 [2024-11-17 08:12:07.389872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.637 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:02.637 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:02.637 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:02.637 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.637 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.637 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.637 08:12:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.637 08:12:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.637 08:12:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.637 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:02.638 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:02.896 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.896 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.896 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:02.896 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:02.896 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.896 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.896 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.896 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:02.896 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:02.896 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.896 08:12:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:15.103 08:12:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:15.103 08:12:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:15.103 08:12:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:15.103 08:12:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.103 08:12:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.103 08:12:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.103 08:12:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:15.103 08:12:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.00 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.00 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:15.103 08:12:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.00 00:12:15.103 08:12:19 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.00 2 00:12:15.103 remove_attach_helper took 45.00s to complete (handling 2 nvme drive(s)) 08:12:19 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:15.103 08:12:19 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68142 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68142 ']' 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68142 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68142 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68142' 00:12:15.103 killing process with pid 68142 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68142 00:12:15.103 08:12:19 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68142 00:12:17.009 08:12:21 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:17.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:17.577 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:17.577 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:17.577 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.577 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.577 00:12:17.577 real 2m30.899s 00:12:17.577 user 1m51.205s 00:12:17.577 sys 0m19.343s 00:12:17.577 08:12:22 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.577 08:12:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:17.577 ************************************ 00:12:17.577 END TEST sw_hotplug 00:12:17.577 ************************************ 00:12:17.837 08:12:22 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:17.837 08:12:22 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:17.837 08:12:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:17.837 08:12:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.837 08:12:22 -- common/autotest_common.sh@10 -- # set +x 00:12:17.837 ************************************ 00:12:17.837 START TEST nvme_xnvme 00:12:17.837 ************************************ 00:12:17.837 08:12:22 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:17.837 * Looking for test storage... 00:12:17.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.837 08:12:22 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:17.837 08:12:22 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:17.837 08:12:22 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:17.837 08:12:22 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:17.837 08:12:22 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.837 08:12:22 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:17.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.837 --rc genhtml_branch_coverage=1 00:12:17.837 --rc genhtml_function_coverage=1 00:12:17.837 --rc genhtml_legend=1 00:12:17.837 --rc geninfo_all_blocks=1 00:12:17.837 --rc geninfo_unexecuted_blocks=1 00:12:17.837 00:12:17.837 ' 00:12:17.837 08:12:22 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:17.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.837 --rc genhtml_branch_coverage=1 00:12:17.837 --rc genhtml_function_coverage=1 00:12:17.837 --rc genhtml_legend=1 00:12:17.837 --rc geninfo_all_blocks=1 00:12:17.837 --rc geninfo_unexecuted_blocks=1 00:12:17.837 00:12:17.837 ' 00:12:17.837 08:12:22 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:17.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.837 --rc genhtml_branch_coverage=1 00:12:17.837 --rc genhtml_function_coverage=1 00:12:17.837 --rc genhtml_legend=1 00:12:17.837 --rc geninfo_all_blocks=1 00:12:17.837 --rc geninfo_unexecuted_blocks=1 00:12:17.837 00:12:17.837 ' 00:12:17.837 08:12:22 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:17.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.837 --rc genhtml_branch_coverage=1 00:12:17.837 --rc genhtml_function_coverage=1 00:12:17.837 --rc genhtml_legend=1 00:12:17.837 --rc geninfo_all_blocks=1 00:12:17.837 --rc geninfo_unexecuted_blocks=1 00:12:17.837 00:12:17.837 ' 00:12:17.837 08:12:22 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.837 08:12:22 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.837 08:12:22 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.837 08:12:22 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.837 08:12:22 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.837 08:12:22 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:17.837 08:12:22 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.838 08:12:22 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:17.838 08:12:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:17.838 08:12:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.838 08:12:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:17.838 ************************************ 00:12:17.838 START TEST xnvme_to_malloc_dd_copy 00:12:17.838 ************************************ 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:17.838 08:12:22 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:17.838 08:12:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:18.098 { 00:12:18.098 "subsystems": [ 00:12:18.098 { 00:12:18.098 "subsystem": "bdev", 00:12:18.098 "config": [ 00:12:18.098 { 00:12:18.098 "params": { 00:12:18.098 "block_size": 512, 00:12:18.098 "num_blocks": 2097152, 00:12:18.098 "name": "malloc0" 00:12:18.098 }, 00:12:18.098 "method": "bdev_malloc_create" 00:12:18.098 }, 00:12:18.098 { 00:12:18.098 "params": { 00:12:18.098 "io_mechanism": "libaio", 00:12:18.098 "filename": "/dev/nullb0", 00:12:18.098 "name": "null0" 00:12:18.098 }, 00:12:18.098 "method": "bdev_xnvme_create" 00:12:18.098 }, 00:12:18.098 { 00:12:18.098 "method": "bdev_wait_for_examine" 00:12:18.098 } 00:12:18.098 ] 00:12:18.098 } 00:12:18.098 ] 00:12:18.098 } 00:12:18.098 [2024-11-17 08:12:22.948047] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
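The JSON block above is what gen_conf hands to spdk_dd over /dev/fd/62: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) as the source, an xnvme bdev named null0 wrapping /dev/nullb0 through libaio as the sink, and bdev_wait_for_examine so the copy only starts once both exist. Saved to a regular file, the equivalent standalone run would look roughly like this (path as in the trace; a sketch of the plumbing, not the harness's exact file-descriptor trick):

    # dd.json holds the subsystem config printed above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json dd.json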
00:12:18.098 [2024-11-17 08:12:22.948403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69501 ] 00:12:18.357 [2024-11-17 08:12:23.135318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.357 [2024-11-17 08:12:23.260944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.263  [2024-11-17T08:12:26.652Z] Copying: 195/1024 [MB] (195 MBps) [2024-11-17T08:12:27.590Z] Copying: 393/1024 [MB] (198 MBps) [2024-11-17T08:12:28.527Z] Copying: 591/1024 [MB] (197 MBps) [2024-11-17T08:12:29.462Z] Copying: 788/1024 [MB] (197 MBps) [2024-11-17T08:12:29.462Z] Copying: 986/1024 [MB] (197 MBps) [2024-11-17T08:12:32.750Z] Copying: 1024/1024 [MB] (average 197 MBps) 00:12:27.738 00:12:27.738 08:12:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:27.738 08:12:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:27.738 08:12:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:27.738 08:12:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:27.738 { 00:12:27.738 "subsystems": [ 00:12:27.738 { 00:12:27.738 "subsystem": "bdev", 00:12:27.738 "config": [ 00:12:27.738 { 00:12:27.738 "params": { 00:12:27.738 "block_size": 512, 00:12:27.738 "num_blocks": 2097152, 00:12:27.738 "name": "malloc0" 00:12:27.738 }, 00:12:27.738 "method": "bdev_malloc_create" 00:12:27.738 }, 00:12:27.738 { 00:12:27.738 "params": { 00:12:27.738 "io_mechanism": "libaio", 00:12:27.738 "filename": "/dev/nullb0", 00:12:27.738 "name": "null0" 00:12:27.738 }, 00:12:27.738 "method": "bdev_xnvme_create" 00:12:27.738 }, 00:12:27.738 { 00:12:27.738 "method": "bdev_wait_for_examine" 00:12:27.738 } 00:12:27.738 ] 00:12:27.738 } 00:12:27.738 ] 00:12:27.738 } 00:12:27.738 [2024-11-17 08:12:32.196715] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
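The second spdk_dd pass just shown reuses the identical config and only swaps the direction: --ib=null0 --ob=malloc0 reads the same 1 GiB back from the null device into malloc memory, so the write path and the read path of the libaio xnvme backend each get exercised once:

    spdk_dd --ib=malloc0 --ob=null0 --json dd.json   # first pass: write side (~197 MBps above)
    spdk_dd --ib=null0 --ob=malloc0 --json dd.json   # this pass: read side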
00:12:27.738 [2024-11-17 08:12:32.196836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69612 ] 00:12:27.738 [2024-11-17 08:12:32.355050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.738 [2024-11-17 08:12:32.436951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.642  [2024-11-17T08:12:35.591Z] Copying: 214/1024 [MB] (214 MBps) [2024-11-17T08:12:36.528Z] Copying: 429/1024 [MB] (214 MBps) [2024-11-17T08:12:37.466Z] Copying: 644/1024 [MB] (215 MBps) [2024-11-17T08:12:38.403Z] Copying: 857/1024 [MB] (212 MBps) [2024-11-17T08:12:40.938Z] Copying: 1024/1024 [MB] (average 214 MBps) 00:12:35.926 00:12:35.926 08:12:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:35.926 08:12:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:35.926 08:12:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:35.926 08:12:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:35.926 08:12:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:35.926 08:12:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:36.185 { 00:12:36.185 "subsystems": [ 00:12:36.185 { 00:12:36.185 "subsystem": "bdev", 00:12:36.185 "config": [ 00:12:36.185 { 00:12:36.185 "params": { 00:12:36.185 "block_size": 512, 00:12:36.185 "num_blocks": 2097152, 00:12:36.185 "name": "malloc0" 00:12:36.185 }, 00:12:36.185 "method": "bdev_malloc_create" 00:12:36.185 }, 00:12:36.185 { 00:12:36.185 "params": { 00:12:36.185 "io_mechanism": "io_uring", 00:12:36.185 "filename": "/dev/nullb0", 00:12:36.185 "name": "null0" 00:12:36.185 }, 00:12:36.185 "method": "bdev_xnvme_create" 00:12:36.185 }, 00:12:36.185 { 00:12:36.185 "method": "bdev_wait_for_examine" 00:12:36.185 } 00:12:36.185 ] 00:12:36.185 } 00:12:36.185 ] 00:12:36.185 } 00:12:36.185 [2024-11-17 08:12:40.975782] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
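With both libaio directions done (≈197 MBps out, ≈214 MBps back), the harness flips a single key and repeats the whole matrix under io_uring, as the config just emitted shows. The loop, reconstructed from the xtrace (xnvme.sh@17-42; simplified):

    declare -A method_bdev_xnvme_create_0   # name/filename filled in earlier
    xnvme_io=(libaio io_uring)
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        # gen_conf re-emits the JSON with the new io_mechanism, then both
        # spdk_dd directions run again (xnvme.sh@42 and @47)
    done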
00:12:36.185 [2024-11-17 08:12:40.975945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69705 ] 00:12:36.185 [2024-11-17 08:12:41.153605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.444 [2024-11-17 08:12:41.235755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.350  [2024-11-17T08:12:44.299Z] Copying: 232/1024 [MB] (232 MBps) [2024-11-17T08:12:45.236Z] Copying: 463/1024 [MB] (230 MBps) [2024-11-17T08:12:46.614Z] Copying: 694/1024 [MB] (231 MBps) [2024-11-17T08:12:46.873Z] Copying: 923/1024 [MB] (229 MBps) [2024-11-17T08:12:49.460Z] Copying: 1024/1024 [MB] (average 230 MBps) 00:12:44.448 00:12:44.448 08:12:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:44.448 08:12:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:44.448 08:12:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:44.448 08:12:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:44.448 { 00:12:44.448 "subsystems": [ 00:12:44.448 { 00:12:44.448 "subsystem": "bdev", 00:12:44.448 "config": [ 00:12:44.448 { 00:12:44.448 "params": { 00:12:44.448 "block_size": 512, 00:12:44.448 "num_blocks": 2097152, 00:12:44.448 "name": "malloc0" 00:12:44.448 }, 00:12:44.448 "method": "bdev_malloc_create" 00:12:44.448 }, 00:12:44.448 { 00:12:44.448 "params": { 00:12:44.448 "io_mechanism": "io_uring", 00:12:44.448 "filename": "/dev/nullb0", 00:12:44.448 "name": "null0" 00:12:44.448 }, 00:12:44.448 "method": "bdev_xnvme_create" 00:12:44.448 }, 00:12:44.448 { 00:12:44.448 "method": "bdev_wait_for_examine" 00:12:44.448 } 00:12:44.448 ] 00:12:44.448 } 00:12:44.448 ] 00:12:44.448 } 00:12:44.448 [2024-11-17 08:12:49.377227] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
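Every one of these copies targets /dev/nullb0, a discard-everything null block device the harness loads with gb=1 before each suite and unloads afterwards (the remove_null_blk/init_null_blk pair is visible in the trace just below). A rough reconstruction of the helpers from dd/common.sh@186-191; the handling of an already-loaded module is a guess:

    init_null_blk() {
        # reload so fresh parameters (e.g. gb=1 -> 1 GiB /dev/nullb0) apply
        [[ -e /sys/module/null_blk ]] && modprobe -r null_blk
        modprobe null_blk "$@"
    }

    remove_null_blk() {
        modprobe -r null_blk
    }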
00:12:44.448 [2024-11-17 08:12:49.377369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69809 ] 00:12:44.707 [2024-11-17 08:12:49.538928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.707 [2024-11-17 08:12:49.619616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.615  [2024-11-17T08:12:53.007Z] Copying: 229/1024 [MB] (229 MBps) [2024-11-17T08:12:53.944Z] Copying: 455/1024 [MB] (226 MBps) [2024-11-17T08:12:54.881Z] Copying: 683/1024 [MB] (227 MBps) [2024-11-17T08:12:55.140Z] Copying: 910/1024 [MB] (226 MBps) [2024-11-17T08:12:58.432Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:12:53.420 00:12:53.420 08:12:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:53.420 08:12:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:53.420 00:12:53.420 real 0m34.989s 00:12:53.420 user 0m30.138s 00:12:53.420 sys 0m4.357s 00:12:53.420 08:12:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.420 ************************************ 00:12:53.420 END TEST xnvme_to_malloc_dd_copy 00:12:53.420 ************************************ 00:12:53.420 08:12:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:53.420 08:12:57 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:53.420 08:12:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:53.420 08:12:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.420 08:12:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.420 ************************************ 00:12:53.420 START TEST xnvme_bdevperf 00:12:53.420 ************************************ 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:53.420 08:12:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:53.420 { 00:12:53.420 "subsystems": [ 00:12:53.420 { 00:12:53.420 "subsystem": "bdev", 00:12:53.420 "config": [ 00:12:53.420 { 00:12:53.420 "params": { 00:12:53.420 "io_mechanism": "libaio", 00:12:53.420 "filename": "/dev/nullb0", 00:12:53.420 "name": "null0" 00:12:53.420 }, 00:12:53.420 "method": "bdev_xnvme_create" 00:12:53.420 }, 00:12:53.420 { 00:12:53.420 "method": "bdev_wait_for_examine" 00:12:53.420 } 00:12:53.420 ] 00:12:53.420 } 00:12:53.420 ] 00:12:53.420 } 00:12:53.420 [2024-11-17 08:12:57.988906] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:53.420 [2024-11-17 08:12:57.989072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69930 ] 00:12:53.420 [2024-11-17 08:12:58.166601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.420 [2024-11-17 08:12:58.250951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.680 Running I/O for 5 seconds... 
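The bdevperf invocation above drives the null0 xnvme bdev directly, again with the config piped in over /dev/fd/62. Spelled out with a regular file (flag glosses are my reading of the trace, not quoted --help output):

    # -q 64        outstanding I/Os (queue depth)
    # -o 4096      I/O size in bytes
    # -w randread  workload pattern
    # -t 5         run time in seconds
    # -T null0     restrict the run to the bdev named null0
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json bdevperf.json -q 64 -w randread -t 5 -T null0 -o 4096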
00:12:55.553 145728.00 IOPS, 569.25 MiB/s [2024-11-17T08:13:01.942Z] 145088.00 IOPS, 566.75 MiB/s [2024-11-17T08:13:02.880Z] 145173.33 IOPS, 567.08 MiB/s [2024-11-17T08:13:03.816Z] 145600.00 IOPS, 568.75 MiB/s 00:12:58.804 Latency(us) 00:12:58.804 [2024-11-17T08:13:03.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.805 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:58.805 null0 : 5.00 145727.22 569.25 0.00 0.00 436.55 118.23 2383.13 00:12:58.805 [2024-11-17T08:13:03.817Z] =================================================================================================================== 00:12:58.805 [2024-11-17T08:13:03.817Z] Total : 145727.22 569.25 0.00 0.00 436.55 118.23 2383.13 00:12:59.374 08:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:59.374 08:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:59.374 08:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:59.374 08:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:59.374 08:13:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:59.374 08:13:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:59.374 { 00:12:59.374 "subsystems": [ 00:12:59.374 { 00:12:59.374 "subsystem": "bdev", 00:12:59.374 "config": [ 00:12:59.374 { 00:12:59.374 "params": { 00:12:59.374 "io_mechanism": "io_uring", 00:12:59.374 "filename": "/dev/nullb0", 00:12:59.374 "name": "null0" 00:12:59.374 }, 00:12:59.374 "method": "bdev_xnvme_create" 00:12:59.374 }, 00:12:59.374 { 00:12:59.374 "method": "bdev_wait_for_examine" 00:12:59.374 } 00:12:59.374 ] 00:12:59.374 } 00:12:59.374 ] 00:12:59.374 } 00:12:59.633 [2024-11-17 08:13:04.424380] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:59.633 [2024-11-17 08:13:04.424549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70000 ] 00:12:59.633 [2024-11-17 08:13:04.595972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.893 [2024-11-17 08:13:04.684829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.152 Running I/O for 5 seconds... 
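For the same 4 KiB random-read load at queue depth 64, libaio settled at ≈145.7K IOPS with a 436.55 µs average latency, while the io_uring pass whose results follow reaches ≈194.6K IOPS at 326.25 µs. Quick ratio check, numbers copied from the two result tables:

    awk 'BEGIN { printf "io_uring/libaio IOPS: %.2fx\n", 194646.47 / 145727.22 }'
    # -> io_uring/libaio IOPS: 1.34x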
00:13:02.023 195264.00 IOPS, 762.75 MiB/s [2024-11-17T08:13:07.972Z] 194720.00 IOPS, 760.62 MiB/s [2024-11-17T08:13:09.349Z] 194965.33 IOPS, 761.58 MiB/s [2024-11-17T08:13:10.285Z] 194832.00 IOPS, 761.06 MiB/s 00:13:05.273 Latency(us) 00:13:05.273 [2024-11-17T08:13:10.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.273 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:05.273 null0 : 5.00 194646.47 760.34 0.00 0.00 326.25 185.25 1742.66 00:13:05.273 [2024-11-17T08:13:10.285Z] =================================================================================================================== 00:13:05.273 [2024-11-17T08:13:10.285Z] Total : 194646.47 760.34 0.00 0.00 326.25 185.25 1742.66 00:13:05.842 08:13:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:05.842 08:13:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:05.842 ************************************ 00:13:05.842 END TEST xnvme_bdevperf 00:13:05.842 ************************************ 00:13:05.842 00:13:05.842 real 0m12.879s 00:13:05.842 user 0m9.795s 00:13:05.842 sys 0m2.871s 00:13:05.842 08:13:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.842 08:13:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:05.842 ************************************ 00:13:05.842 END TEST nvme_xnvme 00:13:05.842 ************************************ 00:13:05.842 00:13:05.842 real 0m48.169s 00:13:05.842 user 0m40.089s 00:13:05.842 sys 0m7.361s 00:13:05.842 08:13:10 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.842 08:13:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:05.842 08:13:10 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:05.842 08:13:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:05.842 08:13:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.842 08:13:10 -- common/autotest_common.sh@10 -- # set +x 00:13:05.842 ************************************ 00:13:05.842 START TEST blockdev_xnvme 00:13:05.842 ************************************ 00:13:05.842 08:13:10 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:06.102 * Looking for test storage... 
00:13:06.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:06.102 08:13:10 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:06.102 08:13:10 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:06.102 08:13:10 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:06.102 08:13:11 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:06.102 08:13:11 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:06.102 08:13:11 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.102 08:13:11 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:06.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.102 --rc genhtml_branch_coverage=1 00:13:06.102 --rc genhtml_function_coverage=1 00:13:06.102 --rc genhtml_legend=1 00:13:06.102 --rc geninfo_all_blocks=1 00:13:06.102 --rc geninfo_unexecuted_blocks=1 00:13:06.102 00:13:06.102 ' 00:13:06.102 08:13:11 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:06.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.102 --rc genhtml_branch_coverage=1 00:13:06.102 --rc genhtml_function_coverage=1 00:13:06.102 --rc genhtml_legend=1 
00:13:06.102 --rc geninfo_all_blocks=1 00:13:06.102 --rc geninfo_unexecuted_blocks=1 00:13:06.102 00:13:06.102 ' 00:13:06.102 08:13:11 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:06.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.102 --rc genhtml_branch_coverage=1 00:13:06.102 --rc genhtml_function_coverage=1 00:13:06.102 --rc genhtml_legend=1 00:13:06.102 --rc geninfo_all_blocks=1 00:13:06.102 --rc geninfo_unexecuted_blocks=1 00:13:06.102 00:13:06.102 ' 00:13:06.102 08:13:11 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:06.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.102 --rc genhtml_branch_coverage=1 00:13:06.102 --rc genhtml_function_coverage=1 00:13:06.102 --rc genhtml_legend=1 00:13:06.102 --rc geninfo_all_blocks=1 00:13:06.102 --rc geninfo_unexecuted_blocks=1 00:13:06.102 00:13:06.102 ' 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70150 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:06.102 08:13:11 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 70150 00:13:06.102 08:13:11 blockdev_xnvme -- common/autotest_common.sh@835 -- # 
'[' -z 70150 ']' 00:13:06.102 08:13:11 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.102 08:13:11 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.102 08:13:11 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.102 08:13:11 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.102 08:13:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.361 [2024-11-17 08:13:11.174384] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:06.362 [2024-11-17 08:13:11.174566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70150 ] 00:13:06.362 [2024-11-17 08:13:11.355017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.621 [2024-11-17 08:13:11.442461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.190 08:13:12 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.190 08:13:12 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:13:07.190 08:13:12 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:07.190 08:13:12 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:07.190 08:13:12 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:07.190 08:13:12 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:07.190 08:13:12 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:07.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:07.758 Waiting for block devices as requested 00:13:07.758 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.018 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.018 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.018 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:13.293 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:13:13.293 
08:13:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:13.293 08:13:18 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:13.293 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:13.294 nvme0n1 00:13:13.294 nvme1n1 00:13:13.294 nvme2n1 00:13:13.294 nvme2n2 00:13:13.294 nvme2n3 00:13:13.294 nvme3n1 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2cb21672-ad30-44aa-b39c-fdbe91b3ed0d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2cb21672-ad30-44aa-b39c-fdbe91b3ed0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a53268b5-27d6-4e2e-b2eb-de0628e3730a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a53268b5-27d6-4e2e-b2eb-de0628e3730a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ee6cad5f-5b3f-4d9e-b984-56537618e406"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ee6cad5f-5b3f-4d9e-b984-56537618e406",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "583517ac-1eba-43de-8b0b-561e7c0c15e9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "583517ac-1eba-43de-8b0b-561e7c0c15e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a5311a69-5f91-4cd3-b91c-62f62e948833"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a5311a69-5f91-4cd3-b91c-62f62e948833",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "10a76e58-4c50-4cd6-b996-635c11f49f2b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "10a76e58-4c50-4cd6-b996-635c11f49f2b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:13.294 08:13:18 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 70150 00:13:13.294 08:13:18 blockdev_xnvme -- 
common/autotest_common.sh@954 -- # '[' -z 70150 ']' 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 70150 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.294 08:13:18 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70150 00:13:13.554 killing process with pid 70150 00:13:13.554 08:13:18 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.554 08:13:18 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.554 08:13:18 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70150' 00:13:13.554 08:13:18 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 70150 00:13:13.554 08:13:18 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 70150 00:13:14.979 08:13:19 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:14.979 08:13:19 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:14.979 08:13:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:14.979 08:13:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.979 08:13:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.979 ************************************ 00:13:14.979 START TEST bdev_hello_world 00:13:14.979 ************************************ 00:13:14.979 08:13:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:15.237 [2024-11-17 08:13:20.079904] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:15.237 [2024-11-17 08:13:20.080114] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70516 ] 00:13:15.496 [2024-11-17 08:13:20.258534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.496 [2024-11-17 08:13:20.341347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.756 [2024-11-17 08:13:20.685638] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:15.756 [2024-11-17 08:13:20.685884] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:15.756 [2024-11-17 08:13:20.685917] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:15.756 [2024-11-17 08:13:20.688071] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:15.756 [2024-11-17 08:13:20.688388] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:15.756 [2024-11-17 08:13:20.688411] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:15.756 [2024-11-17 08:13:20.688605] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
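The NOTICE lines above come from SPDK's hello_bdev example, which run_test launches against the xNVMe bdev: it opens the bdev named on the command line, gets an I/O channel, writes "Hello World!", and reads it back. A minimal sketch of the same invocation outside the harness, using exactly the paths visible in the trace (the trailing '' in the traced call is just run_test's empty extra-arguments slot):
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b nvme0n1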
00:13:15.756 00:13:15.756 [2024-11-17 08:13:20.688639] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:16.693 00:13:16.693 real 0m1.483s 00:13:16.693 user 0m1.181s 00:13:16.693 ************************************ 00:13:16.693 END TEST bdev_hello_world 00:13:16.693 ************************************ 00:13:16.693 sys 0m0.188s 00:13:16.693 08:13:21 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.693 08:13:21 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:16.693 08:13:21 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:16.693 08:13:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:16.693 08:13:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.693 08:13:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.693 ************************************ 00:13:16.693 START TEST bdev_bounds 00:13:16.693 ************************************ 00:13:16.693 08:13:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:16.693 08:13:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=70547 00:13:16.693 Process bdevio pid: 70547 00:13:16.693 08:13:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:16.693 08:13:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:16.693 08:13:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 70547' 00:13:16.694 08:13:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 70547 00:13:16.694 08:13:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 70547 ']' 00:13:16.694 08:13:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.694 08:13:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.694 08:13:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.694 08:13:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.694 08:13:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:16.694 [2024-11-17 08:13:21.627931] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
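Before tests.py can talk to the bdevio app it just started, the harness blocks in waitforlisten until the RPC socket answers. A hedged reconstruction of that retry loop from the traced locals (rpc_addr=/var/tmp/spdk.sock, max_retries=100); the actual readiness probe goes through rpc.py and is not visible in this trace, so a plain socket test stands in for it here:
  waitforlisten() {
      local pid=$1
      local rpc_addr=${2:-/var/tmp/spdk.sock}   # traced: local rpc_addr=/var/tmp/spdk.sock
      local max_retries=100                     # traced: local max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      local i
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # bail out if the app died
          [[ -S $rpc_addr ]] && return 0           # assumption: real helper probes via rpc.py
          sleep 0.1                                # assumption: retry delay not shown in trace
      done
      return 1
  }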
00:13:16.694 [2024-11-17 08:13:21.628340] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70547 ] 00:13:16.953 [2024-11-17 08:13:21.805115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:16.953 [2024-11-17 08:13:21.886916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.953 [2024-11-17 08:13:21.887023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.953 [2024-11-17 08:13:21.887049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.521 08:13:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.521 08:13:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:13:17.521 08:13:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:17.781 I/O targets: 00:13:17.781 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:17.781 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:17.781 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:17.781 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:17.781 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:17.781 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:17.781 00:13:17.781 00:13:17.781 CUnit - A unit testing framework for C - Version 2.1-3 00:13:17.781 http://cunit.sourceforge.net/ 00:13:17.781 00:13:17.781 00:13:17.781 Suite: bdevio tests on: nvme3n1 00:13:17.781 Test: blockdev write read block ...passed 00:13:17.781 Test: blockdev write zeroes read block ...passed 00:13:17.781 Test: blockdev write zeroes read no split ...passed 00:13:17.781 Test: blockdev write zeroes read split ...passed 00:13:17.781 Test: blockdev write zeroes read split partial ...passed 00:13:17.781 Test: blockdev reset ...passed 00:13:17.781 Test: blockdev write read 8 blocks ...passed 00:13:17.781 Test: blockdev write read size > 128k ...passed 00:13:17.781 Test: blockdev write read invalid size ...passed 00:13:17.781 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.781 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.781 Test: blockdev write read max offset ...passed 00:13:17.781 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.781 Test: blockdev writev readv 8 blocks ...passed 00:13:17.781 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.781 Test: blockdev writev readv block ...passed 00:13:17.781 Test: blockdev writev readv size > 128k ...passed 00:13:17.781 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.781 Test: blockdev comparev and writev ...passed 00:13:17.781 Test: blockdev nvme passthru rw ...passed 00:13:17.781 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.781 Test: blockdev nvme admin passthru ...passed 00:13:17.781 Test: blockdev copy ...passed 00:13:17.781 Suite: bdevio tests on: nvme2n3 00:13:17.781 Test: blockdev write read block ...passed 00:13:17.781 Test: blockdev write zeroes read block ...passed 00:13:17.781 Test: blockdev write zeroes read no split ...passed 00:13:17.781 Test: blockdev write zeroes read split ...passed 00:13:17.781 Test: blockdev write zeroes read split partial ...passed 00:13:17.781 Test: blockdev reset ...passed 
00:13:17.781 Test: blockdev write read 8 blocks ...passed 00:13:17.781 Test: blockdev write read size > 128k ...passed 00:13:17.781 Test: blockdev write read invalid size ...passed 00:13:17.781 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.781 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.781 Test: blockdev write read max offset ...passed 00:13:17.781 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.781 Test: blockdev writev readv 8 blocks ...passed 00:13:17.781 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.781 Test: blockdev writev readv block ...passed 00:13:17.781 Test: blockdev writev readv size > 128k ...passed 00:13:17.781 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.781 Test: blockdev comparev and writev ...passed 00:13:17.781 Test: blockdev nvme passthru rw ...passed 00:13:17.781 Test: blockdev nvme passthru vendor specific ...passed 00:13:17.781 Test: blockdev nvme admin passthru ...passed 00:13:17.781 Test: blockdev copy ...passed 00:13:17.781 Suite: bdevio tests on: nvme2n2 00:13:17.781 Test: blockdev write read block ...passed 00:13:17.781 Test: blockdev write zeroes read block ...passed 00:13:17.781 Test: blockdev write zeroes read no split ...passed 00:13:17.781 Test: blockdev write zeroes read split ...passed 00:13:18.042 Test: blockdev write zeroes read split partial ...passed 00:13:18.042 Test: blockdev reset ...passed 00:13:18.042 Test: blockdev write read 8 blocks ...passed 00:13:18.042 Test: blockdev write read size > 128k ...passed 00:13:18.042 Test: blockdev write read invalid size ...passed 00:13:18.042 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.042 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.042 Test: blockdev write read max offset ...passed 00:13:18.042 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.042 Test: blockdev writev readv 8 blocks ...passed 00:13:18.042 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.042 Test: blockdev writev readv block ...passed 00:13:18.042 Test: blockdev writev readv size > 128k ...passed 00:13:18.042 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.042 Test: blockdev comparev and writev ...passed 00:13:18.042 Test: blockdev nvme passthru rw ...passed 00:13:18.042 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.042 Test: blockdev nvme admin passthru ...passed 00:13:18.042 Test: blockdev copy ...passed 00:13:18.042 Suite: bdevio tests on: nvme2n1 00:13:18.042 Test: blockdev write read block ...passed 00:13:18.042 Test: blockdev write zeroes read block ...passed 00:13:18.042 Test: blockdev write zeroes read no split ...passed 00:13:18.042 Test: blockdev write zeroes read split ...passed 00:13:18.042 Test: blockdev write zeroes read split partial ...passed 00:13:18.042 Test: blockdev reset ...passed 00:13:18.042 Test: blockdev write read 8 blocks ...passed 00:13:18.042 Test: blockdev write read size > 128k ...passed 00:13:18.042 Test: blockdev write read invalid size ...passed 00:13:18.042 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.042 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.042 Test: blockdev write read max offset ...passed 00:13:18.042 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.042 Test: blockdev writev readv 8 blocks 
...passed 00:13:18.042 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.042 Test: blockdev writev readv block ...passed 00:13:18.042 Test: blockdev writev readv size > 128k ...passed 00:13:18.042 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.042 Test: blockdev comparev and writev ...passed 00:13:18.042 Test: blockdev nvme passthru rw ...passed 00:13:18.042 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.042 Test: blockdev nvme admin passthru ...passed 00:13:18.042 Test: blockdev copy ...passed 00:13:18.042 Suite: bdevio tests on: nvme1n1 00:13:18.042 Test: blockdev write read block ...passed 00:13:18.042 Test: blockdev write zeroes read block ...passed 00:13:18.042 Test: blockdev write zeroes read no split ...passed 00:13:18.042 Test: blockdev write zeroes read split ...passed 00:13:18.042 Test: blockdev write zeroes read split partial ...passed 00:13:18.042 Test: blockdev reset ...passed 00:13:18.042 Test: blockdev write read 8 blocks ...passed 00:13:18.042 Test: blockdev write read size > 128k ...passed 00:13:18.042 Test: blockdev write read invalid size ...passed 00:13:18.042 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.042 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.042 Test: blockdev write read max offset ...passed 00:13:18.042 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.042 Test: blockdev writev readv 8 blocks ...passed 00:13:18.042 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.042 Test: blockdev writev readv block ...passed 00:13:18.042 Test: blockdev writev readv size > 128k ...passed 00:13:18.042 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.042 Test: blockdev comparev and writev ...passed 00:13:18.042 Test: blockdev nvme passthru rw ...passed 00:13:18.042 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.042 Test: blockdev nvme admin passthru ...passed 00:13:18.042 Test: blockdev copy ...passed 00:13:18.042 Suite: bdevio tests on: nvme0n1 00:13:18.042 Test: blockdev write read block ...passed 00:13:18.042 Test: blockdev write zeroes read block ...passed 00:13:18.042 Test: blockdev write zeroes read no split ...passed 00:13:18.042 Test: blockdev write zeroes read split ...passed 00:13:18.042 Test: blockdev write zeroes read split partial ...passed 00:13:18.042 Test: blockdev reset ...passed 00:13:18.042 Test: blockdev write read 8 blocks ...passed 00:13:18.042 Test: blockdev write read size > 128k ...passed 00:13:18.042 Test: blockdev write read invalid size ...passed 00:13:18.042 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:18.042 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:18.042 Test: blockdev write read max offset ...passed 00:13:18.042 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:18.042 Test: blockdev writev readv 8 blocks ...passed 00:13:18.042 Test: blockdev writev readv 30 x 1block ...passed 00:13:18.042 Test: blockdev writev readv block ...passed 00:13:18.042 Test: blockdev writev readv size > 128k ...passed 00:13:18.042 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:18.042 Test: blockdev comparev and writev ...passed 00:13:18.042 Test: blockdev nvme passthru rw ...passed 00:13:18.042 Test: blockdev nvme passthru vendor specific ...passed 00:13:18.042 Test: blockdev nvme admin passthru ...passed 00:13:18.042 Test: blockdev copy ...passed 
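Each of the six suites above runs the same per-bdev test list; tests.py drives them over JSON-RPC against the bdevio app started earlier, as the run summary that follows confirms. Sketched outside the harness with the arguments from the trace (flag semantics as traced, not documented here):
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  waitforlisten "$bdevio_pid"                    # default socket /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid" && wait "$bdevio_pid"       # the harness does this via killprocess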
00:13:18.042 00:13:18.042 Run Summary: Type Total Ran Passed Failed Inactive 00:13:18.042 suites 6 6 n/a 0 0 00:13:18.042 tests 138 138 138 0 0 00:13:18.042 asserts 780 780 780 0 n/a 00:13:18.042 00:13:18.042 Elapsed time = 0.983 seconds 00:13:18.042 0 00:13:18.042 08:13:23 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 70547 00:13:18.042 08:13:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 70547 ']' 00:13:18.042 08:13:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 70547 00:13:18.042 08:13:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:13:18.042 08:13:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.042 08:13:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70547 00:13:18.302 08:13:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.302 08:13:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.302 08:13:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70547' 00:13:18.302 killing process with pid 70547 00:13:18.302 08:13:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 70547 00:13:18.302 08:13:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 70547 00:13:18.870 08:13:23 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:18.870 00:13:18.870 real 0m2.359s 00:13:18.870 user 0m6.000s 00:13:18.870 sys 0m0.312s 00:13:18.870 08:13:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.130 08:13:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:19.130 ************************************ 00:13:19.130 END TEST bdev_bounds 00:13:19.130 ************************************ 00:13:19.130 08:13:23 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:19.130 08:13:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:19.130 08:13:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.130 08:13:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.130 ************************************ 00:13:19.130 START TEST bdev_nbd 00:13:19.130 ************************************ 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
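With the bounds test done, nbd_function_test begins: it needs the kernel nbd module and a bare bdev_svc app on its own RPC socket, both visible in the trace that follows. A compact sketch of that setup (arguments copied from the traced command; how the real script reacts to a missing module is not shown here, so the exit is an assumption):
  [[ -e /sys/module/nbd ]] || exit 1             # traced check for the nbd kernel module
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  nbd_pid=$!
  waitforlisten "$nbd_pid" /var/tmp/spdk-nbd.sock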
00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=70606 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 70606 /var/tmp/spdk-nbd.sock 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 70606 ']' 00:13:19.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:19.130 08:13:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:19.130 [2024-11-17 08:13:24.013753] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
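Each bdev is then exported one at a time through nbd_start_disk and verified by waitfornbd, whose grep/dd/stat sequence repeats throughout the trace below. A hedged reconstruction of that helper; the retry delay is an assumption, and the real helper wraps the dd in a second retry loop (also traced), collapsed to a single attempt to keep the sketch short:
  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break   # device registered yet?
          sleep 0.1                                          # assumption: delay not traced
      done
      # prove the device is readable: pull one 4 KiB block straight through it
      dd if=/dev/"$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
          bs=4096 count=1 iflag=direct || return 1
      local size
      size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
      rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
      [ "$size" != 0 ]                                       # traced: '[' 4096 '!=' 0 ']'
  }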
00:13:19.130 [2024-11-17 08:13:24.013918] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.389 [2024-11-17 08:13:24.175771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.389 [2024-11-17 08:13:24.260387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:19.958 08:13:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:20.217 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:20.217 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:20.217 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:20.217 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:20.217 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:20.217 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:20.217 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:20.217 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:20.218 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:20.218 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:20.218 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:20.218 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.218 
1+0 records in 00:13:20.218 1+0 records out 00:13:20.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647362 s, 6.3 MB/s 00:13:20.218 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.218 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:20.218 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.218 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:20.218 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:20.218 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:20.218 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:20.218 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.787 1+0 records in 00:13:20.787 1+0 records out 00:13:20.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674257 s, 6.1 MB/s 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:20.787 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:21.047 08:13:25 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.047 1+0 records in 00:13:21.047 1+0 records out 00:13:21.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072583 s, 5.6 MB/s 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:21.047 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:21.048 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:21.048 08:13:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.307 1+0 records in 00:13:21.307 1+0 records out 00:13:21.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552067 s, 7.4 MB/s 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:21.307 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.567 1+0 records in 00:13:21.567 1+0 records out 00:13:21.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053947 s, 7.6 MB/s 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:21.567 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:21.826 08:13:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.826 1+0 records in 00:13:21.826 1+0 records out 00:13:21.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775268 s, 5.3 MB/s 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:21.826 08:13:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:22.084 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:22.084 { 00:13:22.084 "nbd_device": "/dev/nbd0", 00:13:22.084 "bdev_name": "nvme0n1" 00:13:22.084 }, 00:13:22.084 { 00:13:22.084 "nbd_device": "/dev/nbd1", 00:13:22.084 "bdev_name": "nvme1n1" 00:13:22.084 }, 00:13:22.084 { 00:13:22.084 "nbd_device": "/dev/nbd2", 00:13:22.084 "bdev_name": "nvme2n1" 00:13:22.084 }, 00:13:22.084 { 00:13:22.084 "nbd_device": "/dev/nbd3", 00:13:22.084 "bdev_name": "nvme2n2" 00:13:22.084 }, 00:13:22.084 { 00:13:22.084 "nbd_device": "/dev/nbd4", 00:13:22.084 "bdev_name": "nvme2n3" 00:13:22.084 }, 00:13:22.084 { 00:13:22.084 "nbd_device": "/dev/nbd5", 00:13:22.084 "bdev_name": "nvme3n1" 00:13:22.084 } 00:13:22.084 ]' 00:13:22.084 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:22.084 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:22.084 { 00:13:22.084 "nbd_device": "/dev/nbd0", 00:13:22.084 "bdev_name": "nvme0n1" 00:13:22.084 }, 00:13:22.084 { 00:13:22.084 "nbd_device": "/dev/nbd1", 00:13:22.084 "bdev_name": "nvme1n1" 00:13:22.084 }, 00:13:22.084 { 00:13:22.084 "nbd_device": "/dev/nbd2", 00:13:22.084 "bdev_name": "nvme2n1" 00:13:22.084 }, 00:13:22.084 { 00:13:22.084 "nbd_device": "/dev/nbd3", 00:13:22.084 "bdev_name": "nvme2n2" 00:13:22.084 }, 00:13:22.084 { 00:13:22.084 "nbd_device": "/dev/nbd4", 00:13:22.084 "bdev_name": "nvme2n3" 00:13:22.084 }, 00:13:22.084 { 00:13:22.084 "nbd_device": "/dev/nbd5", 00:13:22.084 "bdev_name": "nvme3n1" 00:13:22.084 } 00:13:22.084 ]' 00:13:22.084 08:13:27 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:22.342 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:22.342 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.342 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:22.342 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:22.342 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:22.342 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.342 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:22.601 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:22.601 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:22.601 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:22.601 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.601 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.601 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:22.601 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:22.601 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.601 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.601 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:22.601 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:22.861 08:13:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.429 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:23.688 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:23.688 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:23.688 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:23.688 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.688 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.688 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:23.688 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:23.688 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.688 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:23.688 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.688 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:23.947 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:23.947 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:23.947 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:24.205 08:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:24.463 /dev/nbd0 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.463 1+0 records in 00:13:24.463 1+0 records out 00:13:24.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481797 s, 8.5 MB/s 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:24.463 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:24.722 /dev/nbd1 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.722 1+0 records in 00:13:24.722 1+0 records out 00:13:24.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000851463 s, 4.8 MB/s 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:24.722 08:13:29 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:24.722 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:24.722 /dev/nbd10 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.981 1+0 records in 00:13:24.981 1+0 records out 00:13:24.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555514 s, 7.4 MB/s 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:24.981 08:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:25.240 /dev/nbd11 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:25.240 08:13:30 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.240 1+0 records in 00:13:25.240 1+0 records out 00:13:25.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733078 s, 5.6 MB/s 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:25.240 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.241 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:25.241 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:25.500 /dev/nbd12 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.500 1+0 records in 00:13:25.500 1+0 records out 00:13:25.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712676 s, 5.7 MB/s 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:25.500 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:25.759 /dev/nbd13 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.759 1+0 records in 00:13:25.759 1+0 records out 00:13:25.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000874722 s, 4.7 MB/s 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.759 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:26.018 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:26.018 { 00:13:26.018 "nbd_device": "/dev/nbd0", 00:13:26.018 "bdev_name": "nvme0n1" 00:13:26.018 }, 00:13:26.018 { 00:13:26.018 "nbd_device": "/dev/nbd1", 00:13:26.018 "bdev_name": "nvme1n1" 00:13:26.018 }, 00:13:26.018 { 00:13:26.018 "nbd_device": "/dev/nbd10", 00:13:26.018 "bdev_name": "nvme2n1" 00:13:26.018 }, 00:13:26.018 { 00:13:26.018 "nbd_device": "/dev/nbd11", 00:13:26.018 "bdev_name": "nvme2n2" 00:13:26.018 }, 00:13:26.018 { 00:13:26.018 "nbd_device": "/dev/nbd12", 00:13:26.018 "bdev_name": "nvme2n3" 00:13:26.018 }, 00:13:26.018 { 00:13:26.018 "nbd_device": "/dev/nbd13", 00:13:26.018 "bdev_name": "nvme3n1" 00:13:26.018 } 00:13:26.018 ]' 00:13:26.018 08:13:30 
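Every nbd_start_disk call above funnels through the same readiness probe, traced as autotest_common.sh@872-@893 for each of the six devices. A condensed sketch of that loop, reconstructed from the trace; the scratch-file path and the sleep between retries are assumptions the xtrace output does not show:

# waitfornbd, reconstructed from the @872-@893 lines traced above. The two
# 20-iteration loops and the grep/dd/stat/rm probe are verbatim from the
# log; /tmp/nbdtest and the retry sleep are stand-ins.
waitfornbd() {
    local nbd_name=$1 i size tmp=/tmp/nbdtest

    # Phase 1: wait for the kernel to list the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # Phase 2: prove the device serves reads by pulling one 4 KiB block.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done

    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]   # a zero-byte read means the device never came up
}

This probe is why each device start above is followed by a "1+0 records in / 1+0 records out" pair: the single dd read doubles as the liveness check and as the source of the per-device timing figures in the log.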
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:26.018 { 00:13:26.018 "nbd_device": "/dev/nbd0", 00:13:26.018 "bdev_name": "nvme0n1" 00:13:26.018 }, 00:13:26.018 { 00:13:26.018 "nbd_device": "/dev/nbd1", 00:13:26.018 "bdev_name": "nvme1n1" 00:13:26.018 }, 00:13:26.018 { 00:13:26.018 "nbd_device": "/dev/nbd10", 00:13:26.018 "bdev_name": "nvme2n1" 00:13:26.018 }, 00:13:26.018 { 00:13:26.018 "nbd_device": "/dev/nbd11", 00:13:26.018 "bdev_name": "nvme2n2" 00:13:26.018 }, 00:13:26.018 { 00:13:26.018 "nbd_device": "/dev/nbd12", 00:13:26.018 "bdev_name": "nvme2n3" 00:13:26.018 }, 00:13:26.018 { 00:13:26.018 "nbd_device": "/dev/nbd13", 00:13:26.018 "bdev_name": "nvme3n1" 00:13:26.018 } 00:13:26.018 ]' 00:13:26.018 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:26.018 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:26.018 /dev/nbd1 00:13:26.018 /dev/nbd10 00:13:26.018 /dev/nbd11 00:13:26.018 /dev/nbd12 00:13:26.018 /dev/nbd13' 00:13:26.018 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:26.018 /dev/nbd1 00:13:26.018 /dev/nbd10 00:13:26.018 /dev/nbd11 00:13:26.018 /dev/nbd12 00:13:26.018 /dev/nbd13' 00:13:26.018 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:26.018 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:26.018 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:26.018 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:26.018 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:26.018 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:26.019 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:26.019 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:26.019 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:26.019 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:26.019 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:26.019 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:26.019 256+0 records in 00:13:26.019 256+0 records out 00:13:26.019 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00770569 s, 136 MB/s 00:13:26.019 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:26.019 08:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:26.278 256+0 records in 00:13:26.278 256+0 records out 00:13:26.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162666 s, 6.4 MB/s 00:13:26.278 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:26.278 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:26.538 256+0 records in 00:13:26.538 256+0 records out 00:13:26.538 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.183803 s, 5.7 MB/s 00:13:26.538 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:26.538 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:26.538 256+0 records in 00:13:26.538 256+0 records out 00:13:26.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172695 s, 6.1 MB/s 00:13:26.538 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:26.538 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:26.798 256+0 records in 00:13:26.798 256+0 records out 00:13:26.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168259 s, 6.2 MB/s 00:13:26.798 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:26.798 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:26.798 256+0 records in 00:13:26.798 256+0 records out 00:13:26.798 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14546 s, 7.2 MB/s 00:13:26.798 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:26.798 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:27.057 256+0 records in 00:13:27.057 256+0 records out 00:13:27.057 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143193 s, 7.3 MB/s 00:13:27.057 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:27.057 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.058 08:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:27.317 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:27.317 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:27.317 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:27.317 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.317 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.317 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:27.317 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:27.317 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.317 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.317 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:27.577 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:27.577 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:27.577 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:27.577 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.577 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.577 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:27.577 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:27.577 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.577 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.577 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:28.146 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:28.146 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:28.146 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:28.146 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.146 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.146 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:28.146 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:28.146 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.146 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.146 08:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.405 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:28.665 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:28.665 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:28.665 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:28.665 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.665 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:13:28.665 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:28.665 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:28.665 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.665 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:28.665 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:28.665 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:28.924 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:28.924 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:28.924 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:29.184 08:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:29.184 malloc_lvol_verify 00:13:29.443 08:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:29.701 8d282464-b344-414e-9017-189918a5dcb5 00:13:29.702 08:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:29.960 52d16c0d-d5a9-43e9-8299-e440c12a9ad6 00:13:29.960 08:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:30.219 /dev/nbd0 00:13:30.219 08:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:30.219 08:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:30.219 08:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:30.219 08:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:30.219 08:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:30.219 mke2fs 1.47.0 (5-Feb-2023) 00:13:30.219 
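The lvol round-trip just traced (nbd_with_lvol_verify) is easier to follow as a linear script. Every rpc.py subcommand, name, and size below appears verbatim in the trace; the comments are interpretation, and the mkfs.ext4 output that the sketch elides continues in the log directly below:

# nbd_with_lvol_verify as plain commands, reconstructed from the trace.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB malloc bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs  # prints the new lvstore UUID
$rpc bdev_lvol_create lvol 4 -l lvs                   # 4 MiB logical volume in store "lvs"
$rpc nbd_start_disk lvs/lvol /dev/nbd0                # expose the lvol as /dev/nbd0

# The trace checks /sys/block/nbd0/size first (8192 sectors x 512 B = 4 MiB),
# then proves the device end to end by formatting it:
mkfs.ext4 /dev/nbd0
$rpc nbd_stop_disk /dev/nbd0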
Discarding device blocks: 0/4096 done 00:13:30.219 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:30.219 00:13:30.219 Allocating group tables: 0/1 done 00:13:30.219 Writing inode tables: 0/1 done 00:13:30.219 Creating journal (1024 blocks): done 00:13:30.219 Writing superblocks and filesystem accounting information: 0/1 done 00:13:30.219 00:13:30.219 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:30.219 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.219 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:30.219 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.219 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:30.219 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.219 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 70606 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 70606 ']' 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 70606 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70606 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.479 killing process with pid 70606 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70606' 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 70606 00:13:30.479 08:13:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 70606 00:13:31.417 08:13:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:31.417 00:13:31.417 real 0m12.215s 00:13:31.417 user 0m17.395s 00:13:31.417 sys 0m4.003s 00:13:31.417 08:13:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.417 08:13:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:31.417 ************************************ 00:13:31.417 END TEST bdev_nbd 00:13:31.417 
************************************ 00:13:31.417 08:13:36 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:31.417 08:13:36 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:31.417 08:13:36 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:31.417 08:13:36 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:31.417 08:13:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:31.417 08:13:36 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.417 08:13:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.417 ************************************ 00:13:31.417 START TEST bdev_fio 00:13:31.417 ************************************ 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:31.417 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:13:31.417 08:13:36 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:31.417 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:31.418 ************************************ 00:13:31.418 START TEST bdev_fio_rw_verify 00:13:31.418 ************************************ 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:31.418 08:13:36 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:31.677 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:31.677 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:31.677 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:31.677 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:31.677 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:31.677 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:31.677 fio-3.35 00:13:31.677 Starting 6 threads 00:13:43.903 00:13:43.903 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71023: Sun Nov 17 08:13:47 2024 00:13:43.903 read: IOPS=28.7k, BW=112MiB/s (118MB/s)(1123MiB/10001msec) 00:13:43.903 slat (usec): min=2, max=890, avg= 7.27, stdev= 5.38 00:13:43.903 clat (usec): min=94, max=3579, avg=659.74, 
stdev=217.64 00:13:43.903 lat (usec): min=98, max=3587, avg=667.01, stdev=218.65 00:13:43.903 clat percentiles (usec): 00:13:43.903 | 50.000th=[ 701], 99.000th=[ 1123], 99.900th=[ 1614], 99.990th=[ 3130], 00:13:43.903 | 99.999th=[ 3556] 00:13:43.903 write: IOPS=28.9k, BW=113MiB/s (119MB/s)(1131MiB/10001msec); 0 zone resets 00:13:43.903 slat (usec): min=12, max=1723, avg=24.88, stdev=23.94 00:13:43.903 clat (usec): min=82, max=11763, avg=744.08, stdev=236.23 00:13:43.903 lat (usec): min=107, max=11781, avg=768.95, stdev=237.43 00:13:43.903 clat percentiles (usec): 00:13:43.903 | 50.000th=[ 766], 99.000th=[ 1303], 99.900th=[ 2311], 99.990th=[ 4047], 00:13:43.903 | 99.999th=[11338] 00:13:43.903 bw ( KiB/s): min=98704, max=140656, per=100.00%, avg=115867.47, stdev=2082.80, samples=114 00:13:43.903 iops : min=24676, max=35164, avg=28966.74, stdev=520.69, samples=114 00:13:43.903 lat (usec) : 100=0.01%, 250=2.62%, 500=16.75%, 750=34.01%, 1000=41.16% 00:13:43.903 lat (msec) : 2=5.35%, 4=0.10%, 10=0.01%, 20=0.01% 00:13:43.903 cpu : usr=61.23%, sys=26.66%, ctx=6759, majf=0, minf=24442 00:13:43.903 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:43.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.903 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.903 issued rwts: total=287472,289510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.903 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:43.903 00:13:43.903 Run status group 0 (all jobs): 00:13:43.903 READ: bw=112MiB/s (118MB/s), 112MiB/s-112MiB/s (118MB/s-118MB/s), io=1123MiB (1177MB), run=10001-10001msec 00:13:43.903 WRITE: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=1131MiB (1186MB), run=10001-10001msec 00:13:43.903 ----------------------------------------------------- 00:13:43.903 Suppressions used: 00:13:43.903 count bytes template 00:13:43.903 6 48 /usr/src/fio/parse.c 00:13:43.903 1834 176064 /usr/src/fio/iolog.c 00:13:43.903 1 8 libtcmalloc_minimal.so 00:13:43.903 1 904 libcrypto.so 00:13:43.903 ----------------------------------------------------- 00:13:43.903 00:13:43.903 00:13:43.903 real 0m12.126s 00:13:43.903 user 0m38.411s 00:13:43.903 sys 0m16.335s 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:43.903 ************************************ 00:13:43.903 END TEST bdev_fio_rw_verify 00:13:43.903 ************************************ 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2cb21672-ad30-44aa-b39c-fdbe91b3ed0d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2cb21672-ad30-44aa-b39c-fdbe91b3ed0d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a53268b5-27d6-4e2e-b2eb-de0628e3730a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a53268b5-27d6-4e2e-b2eb-de0628e3730a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ee6cad5f-5b3f-4d9e-b984-56537618e406"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ee6cad5f-5b3f-4d9e-b984-56537618e406",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "583517ac-1eba-43de-8b0b-561e7c0c15e9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "583517ac-1eba-43de-8b0b-561e7c0c15e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a5311a69-5f91-4cd3-b91c-62f62e948833"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a5311a69-5f91-4cd3-b91c-62f62e948833",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "10a76e58-4c50-4cd6-b996-635c11f49f2b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "10a76e58-4c50-4cd6-b996-635c11f49f2b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:43.903 /home/vagrant/spdk_repo/spdk 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:43.903 08:13:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
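Both fio passes above, the randwrite/verify pass that ran and the trimwrite pass that was prepared but skipped (the [[ -n '' ]] check found no unmap-capable bdevs), drive a generated test/bdev/bdev.fio. An approximate reconstruction follows; the [job_*] sections and filename= lines are echoed verbatim in the trace, while the [global] keys were really passed as fio command-line flags (--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 --verify_state_save=0) and are folded into the file here only to keep the sketch self-contained:

# Approximate shape of the generated bdev.fio; see the caveats above.
cat > bdev.fio <<'EOF'
[global]
ioengine=spdk_bdev
iodepth=8
bs=4k
runtime=10
verify_state_save=0
serialize_overlap=1

[job_nvme0n1]
filename=nvme0n1

[job_nvme1n1]
filename=nvme1n1

[job_nvme2n1]
filename=nvme2n1

[job_nvme2n2]
filename=nvme2n2

[job_nvme2n3]
filename=nvme2n3

[job_nvme3n1]
filename=nvme3n1
EOF

# The trace runs it under the ASan runtime it located via ldd:
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio bdev.fio \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0

serialize_overlap=1 is the one option the trace shows fio_config_gen emitting for AIO-capable bdevs (common/autotest_common.sh@1329).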
00:13:43.903 00:13:43.904 real 0m12.315s 00:13:43.904 user 0m38.519s 00:13:43.904 sys 0m16.413s 00:13:43.904 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.904 08:13:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:43.904 ************************************ 00:13:43.904 END TEST bdev_fio 00:13:43.904 ************************************ 00:13:43.904 08:13:48 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:43.904 08:13:48 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:43.904 08:13:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:43.904 08:13:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.904 08:13:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.904 ************************************ 00:13:43.904 START TEST bdev_verify 00:13:43.904 ************************************ 00:13:43.904 08:13:48 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:43.904 [2024-11-17 08:13:48.667847] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:43.904 [2024-11-17 08:13:48.668015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71199 ] 00:13:43.904 [2024-11-17 08:13:48.843061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:44.163 [2024-11-17 08:13:48.939819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.163 [2024-11-17 08:13:48.939831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.421 Running I/O for 5 seconds... 
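For reading the two-core table that follows, here is the invocation just launched, restated standalone. Flag readings are interpretation: -q is queue depth per job, -o the I/O size in bytes, -w the workload, -t the runtime in seconds, -m the reactor core mask (0x3 = cores 0 and 1); that -C fans every bdev out to each core is an inference from the paired "Core Mask 0x1"/"Core Mask 0x2" job lines below, not something the trace states.

# The bdev_verify run above, re-issued standalone (same repo layout assumed).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3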
00:13:46.735 24800.00 IOPS, 96.88 MiB/s [2024-11-17T08:13:52.717Z] 25840.00 IOPS, 100.94 MiB/s [2024-11-17T08:13:53.744Z] 26005.33 IOPS, 101.58 MiB/s [2024-11-17T08:13:54.683Z] 25736.00 IOPS, 100.53 MiB/s [2024-11-17T08:13:54.683Z] 25600.00 IOPS, 100.00 MiB/s 00:13:49.671 Latency(us) 00:13:49.671 [2024-11-17T08:13:54.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.671 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.671 Verification LBA range: start 0x0 length 0xa0000 00:13:49.671 nvme0n1 : 5.05 1900.67 7.42 0.00 0.00 67229.02 8102.63 66727.56 00:13:49.671 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.671 Verification LBA range: start 0xa0000 length 0xa0000 00:13:49.671 nvme0n1 : 5.05 1824.18 7.13 0.00 0.00 70050.43 15013.70 63391.19 00:13:49.671 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.671 Verification LBA range: start 0x0 length 0xbd0bd 00:13:49.671 nvme1n1 : 5.02 3464.78 13.53 0.00 0.00 36787.87 4825.83 47185.92 00:13:49.671 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.671 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:49.671 nvme1n1 : 5.05 3221.58 12.58 0.00 0.00 39567.87 5600.35 54335.30 00:13:49.671 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.671 Verification LBA range: start 0x0 length 0x80000 00:13:49.671 nvme2n1 : 5.06 1898.63 7.42 0.00 0.00 66910.83 8043.05 51713.86 00:13:49.671 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.671 Verification LBA range: start 0x80000 length 0x80000 00:13:49.671 nvme2n1 : 5.06 1847.44 7.22 0.00 0.00 68956.82 6285.50 69110.69 00:13:49.671 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.671 Verification LBA range: start 0x0 length 0x80000 00:13:49.671 nvme2n2 : 5.06 1898.04 7.41 0.00 0.00 66796.24 8519.68 62437.93 00:13:49.671 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.671 Verification LBA range: start 0x80000 length 0x80000 00:13:49.671 nvme2n2 : 5.04 1828.72 7.14 0.00 0.00 69475.38 8102.63 69110.69 00:13:49.671 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.671 Verification LBA range: start 0x0 length 0x80000 00:13:49.671 nvme2n3 : 5.06 1896.91 7.41 0.00 0.00 66740.49 8996.31 67204.19 00:13:49.671 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.671 Verification LBA range: start 0x80000 length 0x80000 00:13:49.671 nvme2n3 : 5.05 1825.01 7.13 0.00 0.00 69506.70 10664.49 65297.69 00:13:49.671 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.671 Verification LBA range: start 0x0 length 0x20000 00:13:49.671 nvme3n1 : 5.07 1919.11 7.50 0.00 0.00 65913.68 4110.89 70063.94 00:13:49.671 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.671 Verification LBA range: start 0x20000 length 0x20000 00:13:49.671 nvme3n1 : 5.06 1823.14 7.12 0.00 0.00 69476.14 4021.53 70063.94 00:13:49.671 [2024-11-17T08:13:54.683Z] =================================================================================================================== 00:13:49.671 [2024-11-17T08:13:54.683Z] Total : 25348.20 99.02 0.00 0.00 60199.92 4021.53 70063.94 00:13:50.610 00:13:50.610 real 0m6.737s 00:13:50.610 user 0m10.418s 00:13:50.610 sys 0m1.875s 00:13:50.610 08:13:55 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.610 ************************************ 00:13:50.610 END TEST bdev_verify 00:13:50.610 ************************************ 00:13:50.610 08:13:55 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:50.610 08:13:55 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:50.610 08:13:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:50.610 08:13:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.610 08:13:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.610 ************************************ 00:13:50.610 START TEST bdev_verify_big_io 00:13:50.610 ************************************ 00:13:50.610 08:13:55 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:50.610 [2024-11-17 08:13:55.432711] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:50.610 [2024-11-17 08:13:55.432849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71294 ] 00:13:50.610 [2024-11-17 08:13:55.594505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:50.869 [2024-11-17 08:13:55.685763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.869 [2024-11-17 08:13:55.685777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.438 Running I/O for 5 seconds... 
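One quick way to sanity-check the 64 KiB table that follows: the MiB/s column is just IOPS times I/O size. Checking the summary row, with figures taken from the table below:

# Aggregate IOPS x I/O size should reproduce the MiB/s column.
awk 'BEGIN { printf "%.2f MiB/s\n", 1629.40 * 65536 / (1024 * 1024) }'
# prints 101.84 MiB/s, matching "Total : 1629.40 101.84" in the table.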
00:13:57.009 1936.00 IOPS, 121.00 MiB/s [2024-11-17T08:14:02.280Z] 2792.00 IOPS, 174.50 MiB/s [2024-11-17T08:14:02.280Z] 3210.67 IOPS, 200.67 MiB/s 00:13:57.268 Latency(us) 00:13:57.268 [2024-11-17T08:14:02.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.268 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.268 Verification LBA range: start 0x0 length 0xa000 00:13:57.268 nvme0n1 : 5.90 132.95 8.31 0.00 0.00 936532.24 84839.33 1670095.59 00:13:57.268 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.268 Verification LBA range: start 0xa000 length 0xa000 00:13:57.268 nvme0n1 : 5.92 129.65 8.10 0.00 0.00 956780.92 148707.14 1288795.23 00:13:57.268 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.268 Verification LBA range: start 0x0 length 0xbd0b 00:13:57.268 nvme1n1 : 5.90 173.56 10.85 0.00 0.00 698080.76 8638.84 739722.71 00:13:57.268 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.268 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:57.268 nvme1n1 : 5.90 162.58 10.16 0.00 0.00 741777.41 72923.69 804543.77 00:13:57.268 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.268 Verification LBA range: start 0x0 length 0x8000 00:13:57.268 nvme2n1 : 5.92 62.15 3.88 0.00 0.00 1909232.05 165865.66 3874011.69 00:13:57.268 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.268 Verification LBA range: start 0x8000 length 0x8000 00:13:57.268 nvme2n1 : 5.93 151.19 9.45 0.00 0.00 774487.64 172538.41 793104.76 00:13:57.268 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.269 Verification LBA range: start 0x0 length 0x8000 00:13:57.269 nvme2n2 : 5.91 144.96 9.06 0.00 0.00 793997.59 85792.58 796917.76 00:13:57.269 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.269 Verification LBA range: start 0x8000 length 0x8000 00:13:57.269 nvme2n2 : 5.91 105.59 6.60 0.00 0.00 1081139.30 151566.89 2318306.21 00:13:57.269 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.269 Verification LBA range: start 0x0 length 0x8000 00:13:57.269 nvme2n3 : 5.89 153.50 9.59 0.00 0.00 730222.36 98661.47 854112.81 00:13:57.269 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.269 Verification LBA range: start 0x8000 length 0x8000 00:13:57.269 nvme2n3 : 5.93 140.22 8.76 0.00 0.00 800247.55 16443.58 1448941.38 00:13:57.269 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.269 Verification LBA range: start 0x0 length 0x2000 00:13:57.269 nvme3n1 : 5.91 157.03 9.81 0.00 0.00 702130.19 6047.19 1387933.32 00:13:57.269 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.269 Verification LBA range: start 0x2000 length 0x2000 00:13:57.269 nvme3n1 : 5.93 116.03 7.25 0.00 0.00 941329.06 7536.64 2516582.40 00:13:57.269 [2024-11-17T08:14:02.281Z] =================================================================================================================== 00:13:57.269 [2024-11-17T08:14:02.281Z] Total : 1629.40 101.84 0.00 0.00 858799.77 6047.19 3874011.69 00:13:58.207 00:13:58.207 real 0m7.847s 00:13:58.207 user 0m14.307s 00:13:58.207 sys 0m0.556s 00:13:58.207 08:14:03 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.207 
************************************ 00:13:58.207 END TEST bdev_verify_big_io 00:13:58.207 ************************************ 00:13:58.207 08:14:03 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.467 08:14:03 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:58.467 08:14:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:58.467 08:14:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.467 08:14:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.467 ************************************ 00:13:58.467 START TEST bdev_write_zeroes 00:13:58.467 ************************************ 00:13:58.467 08:14:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:58.467 [2024-11-17 08:14:03.335846] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:58.467 [2024-11-17 08:14:03.335968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71405 ] 00:13:58.726 [2024-11-17 08:14:03.497697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.726 [2024-11-17 08:14:03.576991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.985 Running I/O for 1 seconds... 
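The write_zeroes pass launched above swaps only the workload: WRITE ZEROES commands at 4 KiB and queue depth 128 for one second on a single reactor. The equivalent standalone run, under the same assumptions as the sketch above:

"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w write_zeroes -t 1   # default single-core mask, no -C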
00:14:00.364 68192.00 IOPS, 266.38 MiB/s 00:14:00.364 Latency(us) 00:14:00.364 [2024-11-17T08:14:05.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.364 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.364 nvme0n1 : 1.03 10316.03 40.30 0.00 0.00 12395.41 6434.44 22163.08 00:14:00.364 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.364 nvme1n1 : 1.03 16040.99 62.66 0.00 0.00 7962.97 4051.32 24069.59 00:14:00.364 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.365 nvme2n1 : 1.03 10300.72 40.24 0.00 0.00 12341.12 4319.42 21209.83 00:14:00.365 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.365 nvme2n2 : 1.03 10285.82 40.18 0.00 0.00 12349.45 4289.63 23354.65 00:14:00.365 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.365 nvme2n3 : 1.03 10270.98 40.12 0.00 0.00 12360.23 4438.57 25380.31 00:14:00.365 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.365 nvme3n1 : 1.04 10256.22 40.06 0.00 0.00 12370.37 4676.89 27405.96 00:14:00.365 [2024-11-17T08:14:05.377Z] =================================================================================================================== 00:14:00.365 [2024-11-17T08:14:05.377Z] Total : 67470.76 263.56 0.00 0.00 11321.40 4051.32 27405.96 00:14:00.934 00:14:00.934 real 0m2.559s 00:14:00.934 user 0m1.868s 00:14:00.934 sys 0m0.514s 00:14:00.934 08:14:05 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.934 ************************************ 00:14:00.934 END TEST bdev_write_zeroes 00:14:00.934 ************************************ 00:14:00.934 08:14:05 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:00.934 08:14:05 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:00.934 08:14:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:00.934 08:14:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.934 08:14:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.934 ************************************ 00:14:00.934 START TEST bdev_json_nonenclosed 00:14:00.934 ************************************ 00:14:00.934 08:14:05 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:01.193 [2024-11-17 08:14:05.978856] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
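A quick sanity check on the table above: the MiB/s column is simply IOPS times the 4 KiB I/O size. Taking the nvme1n1 row as a worked example:

python3 -c 'print(16040.99 * 4096 / 2**20)'   # ~62.66 MiB/s, matching the table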
00:14:01.193 [2024-11-17 08:14:05.979579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71448 ] 00:14:01.193 [2024-11-17 08:14:06.158519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.452 [2024-11-17 08:14:06.239293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.452 [2024-11-17 08:14:06.239410] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:01.452 [2024-11-17 08:14:06.239434] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:01.452 [2024-11-17 08:14:06.239446] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:01.452 00:14:01.452 real 0m0.569s 00:14:01.452 user 0m0.350s 00:14:01.452 sys 0m0.114s 00:14:01.452 08:14:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.452 08:14:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:01.452 ************************************ 00:14:01.452 END TEST bdev_json_nonenclosed 00:14:01.452 ************************************ 00:14:01.712 08:14:06 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:01.712 08:14:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:01.712 08:14:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.712 08:14:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.712 ************************************ 00:14:01.712 START TEST bdev_json_nonarray 00:14:01.712 ************************************ 00:14:01.712 08:14:06 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:01.712 [2024-11-17 08:14:06.603926] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:01.712 [2024-11-17 08:14:06.604120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71479 ] 00:14:01.971 [2024-11-17 08:14:06.782895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.971 [2024-11-17 08:14:06.863209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.971 [2024-11-17 08:14:06.863311] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
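Both of these negative tests exercise the same validation in json_config.c: the configuration file must be a JSON object, and its "subsystems" member must be an array. The fixture files themselves are not shown in this log; hypothetical minimal contents that would trip each check (names and payloads assumed, not taken from the log):

printf '[ { "subsystem": "bdev", "config": [] } ]' > /tmp/nonenclosed.json
#  -> "Invalid JSON configuration: not enclosed in {}."
printf '{ "subsystems": {} }' > /tmp/nonarray.json
#  -> "Invalid JSON configuration: 'subsystems' should be an array."
# A valid file is an object whose "subsystems" value is an array, exactly the
# shape of the save_config dump later in this log.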
00:14:01.971 [2024-11-17 08:14:06.863346] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:01.971 [2024-11-17 08:14:06.863376] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:02.229 00:14:02.230 real 0m0.573s 00:14:02.230 user 0m0.349s 00:14:02.230 sys 0m0.119s 00:14:02.230 08:14:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.230 08:14:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:02.230 ************************************ 00:14:02.230 END TEST bdev_json_nonarray 00:14:02.230 ************************************ 00:14:02.230 08:14:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:02.230 08:14:07 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:02.230 08:14:07 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:02.230 08:14:07 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:02.230 08:14:07 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:02.230 08:14:07 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:02.230 08:14:07 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:02.230 08:14:07 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:02.230 08:14:07 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:02.230 08:14:07 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:02.230 08:14:07 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:02.230 08:14:07 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:02.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:04.703 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.703 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.962 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:04.962 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:05.221 00:14:05.221 real 0m59.156s 00:14:05.221 user 1m40.666s 00:14:05.221 sys 0m28.800s 00:14:05.221 ************************************ 00:14:05.221 END TEST blockdev_xnvme 00:14:05.221 ************************************ 00:14:05.221 08:14:09 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.221 08:14:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.221 08:14:10 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:05.221 08:14:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.221 08:14:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.221 08:14:10 -- common/autotest_common.sh@10 -- # set +x 00:14:05.221 ************************************ 00:14:05.221 START TEST ublk 00:14:05.221 ************************************ 00:14:05.221 08:14:10 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:05.221 * Looking for test storage... 
00:14:05.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:05.221 08:14:10 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:05.221 08:14:10 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:14:05.221 08:14:10 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:05.221 08:14:10 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:05.222 08:14:10 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.222 08:14:10 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.222 08:14:10 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.222 08:14:10 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.222 08:14:10 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.222 08:14:10 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.222 08:14:10 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.222 08:14:10 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.222 08:14:10 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.222 08:14:10 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.222 08:14:10 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.222 08:14:10 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:05.222 08:14:10 ublk -- scripts/common.sh@345 -- # : 1 00:14:05.222 08:14:10 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.222 08:14:10 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:05.222 08:14:10 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:05.222 08:14:10 ublk -- scripts/common.sh@353 -- # local d=1 00:14:05.222 08:14:10 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.222 08:14:10 ublk -- scripts/common.sh@355 -- # echo 1 00:14:05.222 08:14:10 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.481 08:14:10 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:05.481 08:14:10 ublk -- scripts/common.sh@353 -- # local d=2 00:14:05.481 08:14:10 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.481 08:14:10 ublk -- scripts/common.sh@355 -- # echo 2 00:14:05.481 08:14:10 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.481 08:14:10 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.481 08:14:10 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.481 08:14:10 ublk -- scripts/common.sh@368 -- # return 0 00:14:05.481 08:14:10 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.481 08:14:10 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:05.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.481 --rc genhtml_branch_coverage=1 00:14:05.481 --rc genhtml_function_coverage=1 00:14:05.481 --rc genhtml_legend=1 00:14:05.481 --rc geninfo_all_blocks=1 00:14:05.481 --rc geninfo_unexecuted_blocks=1 00:14:05.481 00:14:05.481 ' 00:14:05.481 08:14:10 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:05.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.481 --rc genhtml_branch_coverage=1 00:14:05.481 --rc genhtml_function_coverage=1 00:14:05.481 --rc genhtml_legend=1 00:14:05.481 --rc geninfo_all_blocks=1 00:14:05.481 --rc geninfo_unexecuted_blocks=1 00:14:05.481 00:14:05.481 ' 00:14:05.481 08:14:10 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:05.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.481 --rc genhtml_branch_coverage=1 00:14:05.481 --rc 
genhtml_function_coverage=1 00:14:05.481 --rc genhtml_legend=1 00:14:05.481 --rc geninfo_all_blocks=1 00:14:05.481 --rc geninfo_unexecuted_blocks=1 00:14:05.481 00:14:05.481 ' 00:14:05.481 08:14:10 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:05.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.481 --rc genhtml_branch_coverage=1 00:14:05.481 --rc genhtml_function_coverage=1 00:14:05.481 --rc genhtml_legend=1 00:14:05.481 --rc geninfo_all_blocks=1 00:14:05.481 --rc geninfo_unexecuted_blocks=1 00:14:05.481 00:14:05.481 ' 00:14:05.481 08:14:10 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:05.481 08:14:10 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:05.481 08:14:10 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:05.481 08:14:10 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:05.481 08:14:10 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:05.481 08:14:10 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:05.481 08:14:10 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:05.481 08:14:10 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:05.481 08:14:10 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:05.481 08:14:10 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:05.481 08:14:10 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:05.481 08:14:10 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:05.481 08:14:10 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:05.481 08:14:10 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:05.481 08:14:10 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:05.481 08:14:10 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:05.481 08:14:10 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:05.481 08:14:10 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:05.481 08:14:10 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:05.481 08:14:10 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:05.481 08:14:10 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.481 08:14:10 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.481 08:14:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.481 ************************************ 00:14:05.481 START TEST test_save_ublk_config 00:14:05.481 ************************************ 00:14:05.481 08:14:10 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:14:05.481 08:14:10 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:05.482 08:14:10 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=71773 00:14:05.482 08:14:10 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:05.482 08:14:10 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 71773 00:14:05.482 08:14:10 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:05.482 08:14:10 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 71773 ']' 00:14:05.482 08:14:10 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.482 08:14:10 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.482 08:14:10 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:05.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.482 08:14:10 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.482 08:14:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:05.482 [2024-11-17 08:14:10.392871] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:05.482 [2024-11-17 08:14:10.393045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71773 ] 00:14:05.741 [2024-11-17 08:14:10.564379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.741 [2024-11-17 08:14:10.645104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.679 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.679 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:14:06.679 08:14:11 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:06.679 08:14:11 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:06.679 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.679 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:06.679 [2024-11-17 08:14:11.419155] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:06.679 [2024-11-17 08:14:11.420198] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:06.679 malloc0 00:14:06.679 [2024-11-17 08:14:11.479362] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:06.679 [2024-11-17 08:14:11.479487] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:06.679 [2024-11-17 08:14:11.479503] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:06.679 [2024-11-17 08:14:11.479511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:06.679 [2024-11-17 08:14:11.487207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:06.679 [2024-11-17 08:14:11.487232] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:06.679 [2024-11-17 08:14:11.495173] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:06.679 [2024-11-17 08:14:11.495273] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:06.679 [2024-11-17 08:14:11.519186] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:06.679 0 00:14:06.679 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.679 08:14:11 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:06.679 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.679 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:06.939 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.939 08:14:11 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:06.939 "subsystems": [ 00:14:06.939 { 00:14:06.939 "subsystem": "fsdev", 00:14:06.939 
"config": [ 00:14:06.939 { 00:14:06.939 "method": "fsdev_set_opts", 00:14:06.939 "params": { 00:14:06.939 "fsdev_io_pool_size": 65535, 00:14:06.939 "fsdev_io_cache_size": 256 00:14:06.939 } 00:14:06.939 } 00:14:06.939 ] 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "subsystem": "keyring", 00:14:06.939 "config": [] 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "subsystem": "iobuf", 00:14:06.939 "config": [ 00:14:06.939 { 00:14:06.939 "method": "iobuf_set_options", 00:14:06.939 "params": { 00:14:06.939 "small_pool_count": 8192, 00:14:06.939 "large_pool_count": 1024, 00:14:06.939 "small_bufsize": 8192, 00:14:06.939 "large_bufsize": 135168, 00:14:06.939 "enable_numa": false 00:14:06.939 } 00:14:06.939 } 00:14:06.939 ] 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "subsystem": "sock", 00:14:06.939 "config": [ 00:14:06.939 { 00:14:06.939 "method": "sock_set_default_impl", 00:14:06.939 "params": { 00:14:06.939 "impl_name": "posix" 00:14:06.939 } 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "method": "sock_impl_set_options", 00:14:06.939 "params": { 00:14:06.939 "impl_name": "ssl", 00:14:06.939 "recv_buf_size": 4096, 00:14:06.939 "send_buf_size": 4096, 00:14:06.939 "enable_recv_pipe": true, 00:14:06.939 "enable_quickack": false, 00:14:06.939 "enable_placement_id": 0, 00:14:06.939 "enable_zerocopy_send_server": true, 00:14:06.939 "enable_zerocopy_send_client": false, 00:14:06.939 "zerocopy_threshold": 0, 00:14:06.939 "tls_version": 0, 00:14:06.939 "enable_ktls": false 00:14:06.939 } 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "method": "sock_impl_set_options", 00:14:06.939 "params": { 00:14:06.939 "impl_name": "posix", 00:14:06.939 "recv_buf_size": 2097152, 00:14:06.939 "send_buf_size": 2097152, 00:14:06.939 "enable_recv_pipe": true, 00:14:06.939 "enable_quickack": false, 00:14:06.939 "enable_placement_id": 0, 00:14:06.939 "enable_zerocopy_send_server": true, 00:14:06.939 "enable_zerocopy_send_client": false, 00:14:06.939 "zerocopy_threshold": 0, 00:14:06.939 "tls_version": 0, 00:14:06.939 "enable_ktls": false 00:14:06.939 } 00:14:06.939 } 00:14:06.939 ] 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "subsystem": "vmd", 00:14:06.939 "config": [] 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "subsystem": "accel", 00:14:06.939 "config": [ 00:14:06.939 { 00:14:06.939 "method": "accel_set_options", 00:14:06.939 "params": { 00:14:06.939 "small_cache_size": 128, 00:14:06.939 "large_cache_size": 16, 00:14:06.939 "task_count": 2048, 00:14:06.939 "sequence_count": 2048, 00:14:06.939 "buf_count": 2048 00:14:06.939 } 00:14:06.939 } 00:14:06.939 ] 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "subsystem": "bdev", 00:14:06.939 "config": [ 00:14:06.939 { 00:14:06.939 "method": "bdev_set_options", 00:14:06.939 "params": { 00:14:06.939 "bdev_io_pool_size": 65535, 00:14:06.939 "bdev_io_cache_size": 256, 00:14:06.939 "bdev_auto_examine": true, 00:14:06.939 "iobuf_small_cache_size": 128, 00:14:06.939 "iobuf_large_cache_size": 16 00:14:06.939 } 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "method": "bdev_raid_set_options", 00:14:06.939 "params": { 00:14:06.939 "process_window_size_kb": 1024, 00:14:06.939 "process_max_bandwidth_mb_sec": 0 00:14:06.939 } 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "method": "bdev_iscsi_set_options", 00:14:06.939 "params": { 00:14:06.939 "timeout_sec": 30 00:14:06.939 } 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "method": "bdev_nvme_set_options", 00:14:06.939 "params": { 00:14:06.940 "action_on_timeout": "none", 00:14:06.940 "timeout_us": 0, 00:14:06.940 "timeout_admin_us": 0, 00:14:06.940 
"keep_alive_timeout_ms": 10000, 00:14:06.940 "arbitration_burst": 0, 00:14:06.940 "low_priority_weight": 0, 00:14:06.940 "medium_priority_weight": 0, 00:14:06.940 "high_priority_weight": 0, 00:14:06.940 "nvme_adminq_poll_period_us": 10000, 00:14:06.940 "nvme_ioq_poll_period_us": 0, 00:14:06.940 "io_queue_requests": 0, 00:14:06.940 "delay_cmd_submit": true, 00:14:06.940 "transport_retry_count": 4, 00:14:06.940 "bdev_retry_count": 3, 00:14:06.940 "transport_ack_timeout": 0, 00:14:06.940 "ctrlr_loss_timeout_sec": 0, 00:14:06.940 "reconnect_delay_sec": 0, 00:14:06.940 "fast_io_fail_timeout_sec": 0, 00:14:06.940 "disable_auto_failback": false, 00:14:06.940 "generate_uuids": false, 00:14:06.940 "transport_tos": 0, 00:14:06.940 "nvme_error_stat": false, 00:14:06.940 "rdma_srq_size": 0, 00:14:06.940 "io_path_stat": false, 00:14:06.940 "allow_accel_sequence": false, 00:14:06.940 "rdma_max_cq_size": 0, 00:14:06.940 "rdma_cm_event_timeout_ms": 0, 00:14:06.940 "dhchap_digests": [ 00:14:06.940 "sha256", 00:14:06.940 "sha384", 00:14:06.940 "sha512" 00:14:06.940 ], 00:14:06.940 "dhchap_dhgroups": [ 00:14:06.940 "null", 00:14:06.940 "ffdhe2048", 00:14:06.940 "ffdhe3072", 00:14:06.940 "ffdhe4096", 00:14:06.940 "ffdhe6144", 00:14:06.940 "ffdhe8192" 00:14:06.940 ] 00:14:06.940 } 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "method": "bdev_nvme_set_hotplug", 00:14:06.940 "params": { 00:14:06.940 "period_us": 100000, 00:14:06.940 "enable": false 00:14:06.940 } 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "method": "bdev_malloc_create", 00:14:06.940 "params": { 00:14:06.940 "name": "malloc0", 00:14:06.940 "num_blocks": 8192, 00:14:06.940 "block_size": 4096, 00:14:06.940 "physical_block_size": 4096, 00:14:06.940 "uuid": "e3087c4c-3035-41fc-835f-df48037b9d61", 00:14:06.940 "optimal_io_boundary": 0, 00:14:06.940 "md_size": 0, 00:14:06.940 "dif_type": 0, 00:14:06.940 "dif_is_head_of_md": false, 00:14:06.940 "dif_pi_format": 0 00:14:06.940 } 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "method": "bdev_wait_for_examine" 00:14:06.940 } 00:14:06.940 ] 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "subsystem": "scsi", 00:14:06.940 "config": null 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "subsystem": "scheduler", 00:14:06.940 "config": [ 00:14:06.940 { 00:14:06.940 "method": "framework_set_scheduler", 00:14:06.940 "params": { 00:14:06.940 "name": "static" 00:14:06.940 } 00:14:06.940 } 00:14:06.940 ] 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "subsystem": "vhost_scsi", 00:14:06.940 "config": [] 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "subsystem": "vhost_blk", 00:14:06.940 "config": [] 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "subsystem": "ublk", 00:14:06.940 "config": [ 00:14:06.940 { 00:14:06.940 "method": "ublk_create_target", 00:14:06.940 "params": { 00:14:06.940 "cpumask": "1" 00:14:06.940 } 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "method": "ublk_start_disk", 00:14:06.940 "params": { 00:14:06.940 "bdev_name": "malloc0", 00:14:06.940 "ublk_id": 0, 00:14:06.940 "num_queues": 1, 00:14:06.940 "queue_depth": 128 00:14:06.940 } 00:14:06.940 } 00:14:06.940 ] 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "subsystem": "nbd", 00:14:06.940 "config": [] 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "subsystem": "nvmf", 00:14:06.940 "config": [ 00:14:06.940 { 00:14:06.940 "method": "nvmf_set_config", 00:14:06.940 "params": { 00:14:06.940 "discovery_filter": "match_any", 00:14:06.940 "admin_cmd_passthru": { 00:14:06.940 "identify_ctrlr": false 00:14:06.940 }, 00:14:06.940 "dhchap_digests": [ 00:14:06.940 "sha256", 00:14:06.940 
"sha384", 00:14:06.940 "sha512" 00:14:06.940 ], 00:14:06.940 "dhchap_dhgroups": [ 00:14:06.940 "null", 00:14:06.940 "ffdhe2048", 00:14:06.940 "ffdhe3072", 00:14:06.940 "ffdhe4096", 00:14:06.940 "ffdhe6144", 00:14:06.940 "ffdhe8192" 00:14:06.940 ] 00:14:06.940 } 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "method": "nvmf_set_max_subsystems", 00:14:06.940 "params": { 00:14:06.940 "max_subsystems": 1024 00:14:06.940 } 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "method": "nvmf_set_crdt", 00:14:06.940 "params": { 00:14:06.940 "crdt1": 0, 00:14:06.940 "crdt2": 0, 00:14:06.940 "crdt3": 0 00:14:06.940 } 00:14:06.940 } 00:14:06.940 ] 00:14:06.940 }, 00:14:06.940 { 00:14:06.940 "subsystem": "iscsi", 00:14:06.940 "config": [ 00:14:06.940 { 00:14:06.940 "method": "iscsi_set_options", 00:14:06.940 "params": { 00:14:06.940 "node_base": "iqn.2016-06.io.spdk", 00:14:06.940 "max_sessions": 128, 00:14:06.940 "max_connections_per_session": 2, 00:14:06.940 "max_queue_depth": 64, 00:14:06.940 "default_time2wait": 2, 00:14:06.940 "default_time2retain": 20, 00:14:06.940 "first_burst_length": 8192, 00:14:06.940 "immediate_data": true, 00:14:06.940 "allow_duplicated_isid": false, 00:14:06.940 "error_recovery_level": 0, 00:14:06.940 "nop_timeout": 60, 00:14:06.940 "nop_in_interval": 30, 00:14:06.940 "disable_chap": false, 00:14:06.940 "require_chap": false, 00:14:06.940 "mutual_chap": false, 00:14:06.940 "chap_group": 0, 00:14:06.940 "max_large_datain_per_connection": 64, 00:14:06.940 "max_r2t_per_connection": 4, 00:14:06.940 "pdu_pool_size": 36864, 00:14:06.940 "immediate_data_pool_size": 16384, 00:14:06.940 "data_out_pool_size": 2048 00:14:06.940 } 00:14:06.940 } 00:14:06.940 ] 00:14:06.940 } 00:14:06.940 ] 00:14:06.940 }' 00:14:06.940 08:14:11 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 71773 00:14:06.940 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 71773 ']' 00:14:06.940 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 71773 00:14:06.940 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:14:06.940 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.940 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71773 00:14:06.940 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.940 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.940 killing process with pid 71773 00:14:06.940 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71773' 00:14:06.940 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 71773 00:14:06.940 08:14:11 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 71773 00:14:08.319 [2024-11-17 08:14:12.904724] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:08.319 [2024-11-17 08:14:12.936203] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:08.319 [2024-11-17 08:14:12.936322] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:08.319 [2024-11-17 08:14:12.943257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:08.319 [2024-11-17 08:14:12.943352] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:14:08.319 [2024-11-17 08:14:12.943388] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:08.319 [2024-11-17 08:14:12.943417] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:08.319 [2024-11-17 08:14:12.943592] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:09.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.699 08:14:14 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=71832 00:14:09.699 08:14:14 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 71832 00:14:09.699 08:14:14 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 71832 ']' 00:14:09.699 08:14:14 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.699 08:14:14 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.699 08:14:14 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:09.699 08:14:14 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.699 08:14:14 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.699 08:14:14 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:09.699 "subsystems": [ 00:14:09.699 { 00:14:09.699 "subsystem": "fsdev", 00:14:09.699 "config": [ 00:14:09.699 { 00:14:09.699 "method": "fsdev_set_opts", 00:14:09.699 "params": { 00:14:09.699 "fsdev_io_pool_size": 65535, 00:14:09.699 "fsdev_io_cache_size": 256 00:14:09.699 } 00:14:09.699 } 00:14:09.699 ] 00:14:09.699 }, 00:14:09.699 { 00:14:09.699 "subsystem": "keyring", 00:14:09.699 "config": [] 00:14:09.699 }, 00:14:09.699 { 00:14:09.699 "subsystem": "iobuf", 00:14:09.699 "config": [ 00:14:09.699 { 00:14:09.699 "method": "iobuf_set_options", 00:14:09.699 "params": { 00:14:09.699 "small_pool_count": 8192, 00:14:09.700 "large_pool_count": 1024, 00:14:09.700 "small_bufsize": 8192, 00:14:09.700 "large_bufsize": 135168, 00:14:09.700 "enable_numa": false 00:14:09.700 } 00:14:09.700 } 00:14:09.700 ] 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "subsystem": "sock", 00:14:09.700 "config": [ 00:14:09.700 { 00:14:09.700 "method": "sock_set_default_impl", 00:14:09.700 "params": { 00:14:09.700 "impl_name": "posix" 00:14:09.700 } 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "method": "sock_impl_set_options", 00:14:09.700 "params": { 00:14:09.700 "impl_name": "ssl", 00:14:09.700 "recv_buf_size": 4096, 00:14:09.700 "send_buf_size": 4096, 00:14:09.700 "enable_recv_pipe": true, 00:14:09.700 "enable_quickack": false, 00:14:09.700 "enable_placement_id": 0, 00:14:09.700 "enable_zerocopy_send_server": true, 00:14:09.700 "enable_zerocopy_send_client": false, 00:14:09.700 "zerocopy_threshold": 0, 00:14:09.700 "tls_version": 0, 00:14:09.700 "enable_ktls": false 00:14:09.700 } 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "method": "sock_impl_set_options", 00:14:09.700 "params": { 00:14:09.700 "impl_name": "posix", 00:14:09.700 "recv_buf_size": 2097152, 00:14:09.700 "send_buf_size": 2097152, 00:14:09.700 "enable_recv_pipe": true, 00:14:09.700 "enable_quickack": false, 00:14:09.700 "enable_placement_id": 0, 00:14:09.700 "enable_zerocopy_send_server": true, 00:14:09.700 "enable_zerocopy_send_client": false, 00:14:09.700 "zerocopy_threshold": 0, 00:14:09.700 "tls_version": 0, 00:14:09.700 "enable_ktls": false 00:14:09.700 } 00:14:09.700 } 
00:14:09.700 ] 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "subsystem": "vmd", 00:14:09.700 "config": [] 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "subsystem": "accel", 00:14:09.700 "config": [ 00:14:09.700 { 00:14:09.700 "method": "accel_set_options", 00:14:09.700 "params": { 00:14:09.700 "small_cache_size": 128, 00:14:09.700 "large_cache_size": 16, 00:14:09.700 "task_count": 2048, 00:14:09.700 "sequence_count": 2048, 00:14:09.700 "buf_count": 2048 00:14:09.700 } 00:14:09.700 } 00:14:09.700 ] 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "subsystem": "bdev", 00:14:09.700 "config": [ 00:14:09.700 { 00:14:09.700 "method": "bdev_set_options", 00:14:09.700 "params": { 00:14:09.700 "bdev_io_pool_size": 65535, 00:14:09.700 "bdev_io_cache_size": 256, 00:14:09.700 "bdev_auto_examine": true, 00:14:09.700 "iobuf_small_cache_size": 128, 00:14:09.700 "iobuf_large_cache_size": 16 00:14:09.700 } 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "method": "bdev_raid_set_options", 00:14:09.700 "params": { 00:14:09.700 "process_window_size_kb": 1024, 00:14:09.700 "process_max_bandwidth_mb_sec": 0 00:14:09.700 } 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "method": "bdev_iscsi_set_options", 00:14:09.700 "params": { 00:14:09.700 "timeout_sec": 30 00:14:09.700 } 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "method": "bdev_nvme_set_options", 00:14:09.700 "params": { 00:14:09.700 "action_on_timeout": "none", 00:14:09.700 "timeout_us": 0, 00:14:09.700 "timeout_admin_us": 0, 00:14:09.700 "keep_alive_timeout_ms": 10000, 00:14:09.700 "arbitration_burst": 0, 00:14:09.700 "low_priority_weight": 0, 00:14:09.700 "medium_priority_weight": 0, 00:14:09.700 "high_priority_weight": 0, 00:14:09.700 "nvme_adminq_poll_period_us": 10000, 00:14:09.700 "nvme_ioq_poll_period_us": 0, 00:14:09.700 "io_queue_requests": 0, 00:14:09.700 "delay_cmd_submit": true, 00:14:09.700 "transport_retry_count": 4, 00:14:09.700 "bdev_retry_count": 3, 00:14:09.700 "transport_ack_timeout": 0, 00:14:09.700 "ctrlr_loss_timeout_sec": 0, 00:14:09.700 "reconnect_delay_sec": 0, 00:14:09.700 "fast_io_fail_timeout_sec": 0, 00:14:09.700 "disable_auto_failback": false, 00:14:09.700 "generate_uuids": false, 00:14:09.700 "transport_tos": 0, 00:14:09.700 "nvme_error_stat": false, 00:14:09.700 "rdma_srq_size": 0, 00:14:09.700 "io_path_stat": false, 00:14:09.700 "allow_accel_sequence": false, 00:14:09.700 "rdma_max_cq_size": 0, 00:14:09.700 "rdma_cm_event_timeout_ms": 0, 00:14:09.700 "dhchap_digests": [ 00:14:09.700 "sha256", 00:14:09.700 "sha384", 00:14:09.700 "sha512" 00:14:09.700 ], 00:14:09.700 "dhchap_dhgroups": [ 00:14:09.700 "null", 00:14:09.700 "ffdhe2048", 00:14:09.700 "ffdhe3072", 00:14:09.700 "ffdhe4096", 00:14:09.700 "ffdhe6144", 00:14:09.700 "ffdhe8192" 00:14:09.700 ] 00:14:09.700 } 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "method": "bdev_nvme_set_hotplug", 00:14:09.700 "params": { 00:14:09.700 "period_us": 100000, 00:14:09.700 "enable": false 00:14:09.700 } 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "method": "bdev_malloc_create", 00:14:09.700 "params": { 00:14:09.700 "name": "malloc0", 00:14:09.700 "num_blocks": 8192, 00:14:09.700 "block_size": 4096, 00:14:09.700 "physical_block_size": 4096, 00:14:09.700 "uuid": "e3087c4c-3035-41fc-835f-df48037b9d61", 00:14:09.700 "optimal_io_boundary": 0, 00:14:09.700 "md_size": 0, 00:14:09.700 "dif_type": 0, 00:14:09.700 "dif_is_head_of_md": false, 00:14:09.700 "dif_pi_format": 0 00:14:09.700 } 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "method": "bdev_wait_for_examine" 00:14:09.700 } 00:14:09.700 ] 00:14:09.700 }, 
00:14:09.700 { 00:14:09.700 "subsystem": "scsi", 00:14:09.700 "config": null 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "subsystem": "scheduler", 00:14:09.700 "config": [ 00:14:09.700 { 00:14:09.700 "method": "framework_set_scheduler", 00:14:09.700 "params": { 00:14:09.700 "name": "static" 00:14:09.700 } 00:14:09.700 } 00:14:09.700 ] 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "subsystem": "vhost_scsi", 00:14:09.700 "config": [] 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "subsystem": "vhost_blk", 00:14:09.700 "config": [] 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "subsystem": "ublk", 00:14:09.700 "config": [ 00:14:09.700 { 00:14:09.700 "method": "ublk_create_target", 00:14:09.700 "params": { 00:14:09.700 "cpumask": "1" 00:14:09.700 } 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "method": "ublk_start_disk", 00:14:09.700 "params": { 00:14:09.700 "bdev_name": "malloc0", 00:14:09.700 "ublk_id": 0, 00:14:09.700 "num_queues": 1, 00:14:09.700 "queue_depth": 128 00:14:09.700 } 00:14:09.700 } 00:14:09.700 ] 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "subsystem": "nbd", 00:14:09.700 "config": [] 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "subsystem": "nvmf", 00:14:09.700 "config": [ 00:14:09.700 { 00:14:09.700 "method": "nvmf_set_config", 00:14:09.700 "params": { 00:14:09.700 "discovery_filter": "match_any", 00:14:09.700 "admin_cmd_passthru": { 00:14:09.700 "identify_ctrlr": false 00:14:09.700 }, 00:14:09.700 "dhchap_digests": [ 00:14:09.700 "sha256", 00:14:09.700 "sha384", 00:14:09.700 "sha512" 00:14:09.700 ], 00:14:09.700 "dhchap_dhgroups": [ 00:14:09.700 "null", 00:14:09.700 "ffdhe2048", 00:14:09.700 "ffdhe3072", 00:14:09.700 "ffdhe4096", 00:14:09.700 "ffdhe6144", 00:14:09.700 "ffdhe8192" 00:14:09.700 ] 00:14:09.700 } 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "method": "nvmf_set_max_subsystems", 00:14:09.700 "params": { 00:14:09.700 "max_subsystems": 1024 00:14:09.700 } 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "method": "nvmf_set_crdt", 00:14:09.700 "params": { 00:14:09.700 "crdt1": 0, 00:14:09.700 "crdt2": 0, 00:14:09.700 "crdt3": 0 00:14:09.700 } 00:14:09.700 } 00:14:09.700 ] 00:14:09.700 }, 00:14:09.700 { 00:14:09.700 "subsystem": "iscsi", 00:14:09.700 "config": [ 00:14:09.700 { 00:14:09.700 "method": "iscsi_set_options", 00:14:09.700 "params": { 00:14:09.700 "node_base": "iqn.2016-06.io.spdk", 00:14:09.700 "max_sessions": 128, 00:14:09.700 "max_connections_per_session": 2, 00:14:09.700 "max_queue_depth": 64, 00:14:09.700 "default_time2wait": 2, 00:14:09.700 "default_time2retain": 20, 00:14:09.700 "first_burst_length": 8192, 00:14:09.700 "immediate_data": true, 00:14:09.700 "allow_duplicated_isid": false, 00:14:09.700 "error_recovery_level": 0, 00:14:09.700 "nop_timeout": 60, 00:14:09.700 "nop_in_interval": 30, 00:14:09.700 "disable_chap": false, 00:14:09.700 "require_chap": false, 00:14:09.700 "mutual_chap": false, 00:14:09.700 "chap_group": 0, 00:14:09.700 "max_large_datain_per_connection": 64, 00:14:09.700 "max_r2t_per_connection": 4, 00:14:09.700 "pdu_pool_size": 36864, 00:14:09.700 "immediate_data_pool_size": 16384, 00:14:09.700 "data_out_pool_size": 2048 00:14:09.700 } 00:14:09.700 } 00:14:09.700 ] 00:14:09.700 } 00:14:09.700 ] 00:14:09.700 }' 00:14:09.701 08:14:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:09.701 [2024-11-17 08:14:14.500468] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:14:09.701 [2024-11-17 08:14:14.500641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71832 ] 00:14:09.701 [2024-11-17 08:14:14.673666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.960 [2024-11-17 08:14:14.753139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.529 [2024-11-17 08:14:15.516171] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:10.529 [2024-11-17 08:14:15.517046] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:10.529 [2024-11-17 08:14:15.524285] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:10.529 [2024-11-17 08:14:15.524367] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:10.529 [2024-11-17 08:14:15.524381] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:10.529 [2024-11-17 08:14:15.524389] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:10.529 [2024-11-17 08:14:15.532254] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:10.529 [2024-11-17 08:14:15.532278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:10.529 [2024-11-17 08:14:15.539197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:10.529 [2024-11-17 08:14:15.539305] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:10.789 [2024-11-17 08:14:15.555148] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 71832 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 71832 ']' 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 71832 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71832 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:10.789 killing process with pid 71832 00:14:10.789 
08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71832' 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 71832 00:14:10.789 08:14:15 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 71832 00:14:12.168 [2024-11-17 08:14:17.149947] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:12.427 [2024-11-17 08:14:17.183151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:12.427 [2024-11-17 08:14:17.183303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:12.427 [2024-11-17 08:14:17.192129] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:12.427 [2024-11-17 08:14:17.192211] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:12.427 [2024-11-17 08:14:17.192221] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:12.427 [2024-11-17 08:14:17.192311] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:12.427 [2024-11-17 08:14:17.192507] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:14.345 08:14:19 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:14.345 00:14:14.345 real 0m8.745s 00:14:14.345 user 0m6.170s 00:14:14.345 sys 0m3.498s 00:14:14.345 08:14:19 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.345 08:14:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:14.345 ************************************ 00:14:14.345 END TEST test_save_ublk_config 00:14:14.345 ************************************ 00:14:14.345 08:14:19 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71917 00:14:14.345 08:14:19 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.345 08:14:19 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71917 00:14:14.345 08:14:19 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:14.345 08:14:19 ublk -- common/autotest_common.sh@835 -- # '[' -z 71917 ']' 00:14:14.345 08:14:19 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.345 08:14:19 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.345 08:14:19 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.345 08:14:19 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.345 08:14:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:14.345 [2024-11-17 08:14:19.174209] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:14:14.345 [2024-11-17 08:14:19.174402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71917 ] 00:14:14.345 [2024-11-17 08:14:19.344697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:14.607 [2024-11-17 08:14:19.425269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.607 [2024-11-17 08:14:19.425324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.175 08:14:20 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.175 08:14:20 ublk -- common/autotest_common.sh@868 -- # return 0 00:14:15.175 08:14:20 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:15.175 08:14:20 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:15.175 08:14:20 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.175 08:14:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.175 ************************************ 00:14:15.175 START TEST test_create_ublk 00:14:15.175 ************************************ 00:14:15.175 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:14:15.175 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:15.175 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.175 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.175 [2024-11-17 08:14:20.075180] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:15.175 [2024-11-17 08:14:20.077584] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:15.175 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.175 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:15.175 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:15.175 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.175 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.434 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.434 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:15.434 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:15.434 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.434 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.434 [2024-11-17 08:14:20.288339] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:15.434 [2024-11-17 08:14:20.288809] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:15.434 [2024-11-17 08:14:20.288833] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:15.434 [2024-11-17 08:14:20.288844] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:15.434 [2024-11-17 08:14:20.297428] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:15.434 [2024-11-17 08:14:20.297453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:15.434 
[2024-11-17 08:14:20.304208] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:15.434 [2024-11-17 08:14:20.319201] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:15.434 [2024-11-17 08:14:20.332313] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:15.434 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.434 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:15.434 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:15.434 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:15.434 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.434 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.434 08:14:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.434 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:15.434 { 00:14:15.434 "ublk_device": "/dev/ublkb0", 00:14:15.434 "id": 0, 00:14:15.434 "queue_depth": 512, 00:14:15.434 "num_queues": 4, 00:14:15.434 "bdev_name": "Malloc0" 00:14:15.434 } 00:14:15.434 ]' 00:14:15.434 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:15.434 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:15.434 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:15.693 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:15.693 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:15.693 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:15.693 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:15.693 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:15.693 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:15.693 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:15.693 08:14:20 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:15.693 08:14:20 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:15.693 08:14:20 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:15.693 08:14:20 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:15.693 08:14:20 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:15.693 08:14:20 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:15.693 08:14:20 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:15.693 08:14:20 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:15.693 08:14:20 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:15.693 08:14:20 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:15.693 08:14:20 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
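Stripped of the test's rpc_cmd wrapper, the setup traced above reduces to four RPC calls; an equivalent sketch against the same target socket:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/scripts/rpc.py" ublk_create_target                      # start the ublk target
"$SPDK/scripts/rpc.py" bdev_malloc_create 128 4096             # 128 MiB ramdisk, 4 KiB blocks -> Malloc0
"$SPDK/scripts/rpc.py" ublk_start_disk Malloc0 0 -q 4 -d 512   # expose it as /dev/ublkb0
"$SPDK/scripts/rpc.py" ublk_get_disks -n 0                     # confirm device, queue count, depth
# The fio command assembled above then writes the 0xcc pattern across the
# first 128 MiB of /dev/ublkb0 for 10 seconds with verify metadata enabled.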
00:14:15.693 08:14:20 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:15.952 fio: verification read phase will never start because write phase uses all of runtime 00:14:15.952 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:15.952 fio-3.35 00:14:15.952 Starting 1 process 00:14:26.041 00:14:26.041 fio_test: (groupid=0, jobs=1): err= 0: pid=71958: Sun Nov 17 08:14:30 2024 00:14:26.041 write: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(537MiB/10001msec); 0 zone resets 00:14:26.041 clat (usec): min=45, max=4040, avg=71.68, stdev=121.18 00:14:26.041 lat (usec): min=45, max=4040, avg=72.28, stdev=121.20 00:14:26.041 clat percentiles (usec): 00:14:26.041 | 1.00th=[ 53], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 62], 00:14:26.041 | 30.00th=[ 63], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 64], 00:14:26.041 | 70.00th=[ 65], 80.00th=[ 68], 90.00th=[ 76], 95.00th=[ 84], 00:14:26.041 | 99.00th=[ 108], 99.50th=[ 129], 99.90th=[ 2540], 99.95th=[ 3064], 00:14:26.041 | 99.99th=[ 3752] 00:14:26.041 bw ( KiB/s): min=54088, max=56312, per=100.00%, avg=55059.42, stdev=623.27, samples=19 00:14:26.041 iops : min=13522, max=14078, avg=13764.84, stdev=155.81, samples=19 00:14:26.041 lat (usec) : 50=0.22%, 100=98.32%, 250=1.10%, 500=0.08%, 750=0.02% 00:14:26.041 lat (usec) : 1000=0.02% 00:14:26.041 lat (msec) : 2=0.09%, 4=0.15%, 10=0.01% 00:14:26.041 cpu : usr=2.26%, sys=7.24%, ctx=137434, majf=0, minf=797 00:14:26.041 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:26.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.041 issued rwts: total=0,137425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.041 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:26.041 00:14:26.041 Run status group 0 (all jobs): 00:14:26.041 WRITE: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=537MiB (563MB), run=10001-10001msec 00:14:26.041 00:14:26.041 Disk stats (read/write): 00:14:26.041 ublkb0: ios=0/136006, merge=0/0, ticks=0/9026, in_queue=9026, util=99.09% 00:14:26.041 08:14:30 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.041 [2024-11-17 08:14:30.871169] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:26.041 [2024-11-17 08:14:30.914515] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:26.041 [2024-11-17 08:14:30.915549] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:26.041 [2024-11-17 08:14:30.922279] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:26.041 [2024-11-17 08:14:30.922694] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:26.041 [2024-11-17 08:14:30.922717] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.041 08:14:30 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.041 [2024-11-17 08:14:30.938230] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:26.041 request: 00:14:26.041 { 00:14:26.041 "ublk_id": 0, 00:14:26.041 "method": "ublk_stop_disk", 00:14:26.041 "req_id": 1 00:14:26.041 } 00:14:26.041 Got JSON-RPC error response 00:14:26.041 response: 00:14:26.041 { 00:14:26.041 "code": -19, 00:14:26.041 "message": "No such device" 00:14:26.041 } 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.041 08:14:30 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.041 [2024-11-17 08:14:30.953278] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:26.041 [2024-11-17 08:14:30.961108] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:26.041 [2024-11-17 08:14:30.961168] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.041 08:14:30 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.041 08:14:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.609 08:14:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.609 08:14:31 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:26.609 08:14:31 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:26.609 08:14:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.609 08:14:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.609 08:14:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.609 08:14:31 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:26.609 08:14:31 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:26.609 08:14:31 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:26.609 08:14:31 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:26.609 08:14:31 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.609 08:14:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.609 08:14:31 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.609 08:14:31 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:26.609 08:14:31 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:26.609 08:14:31 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:26.609 00:14:26.609 real 0m11.522s 00:14:26.609 user 0m0.687s 00:14:26.609 sys 0m0.809s 00:14:26.609 08:14:31 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.609 ************************************ 00:14:26.609 END TEST test_create_ublk 00:14:26.609 08:14:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.609 ************************************ 00:14:26.869 08:14:31 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:26.869 08:14:31 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.869 08:14:31 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.869 08:14:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.869 ************************************ 00:14:26.869 START TEST test_create_multi_ublk 00:14:26.869 ************************************ 00:14:26.869 08:14:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:14:26.869 08:14:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:26.869 08:14:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.869 08:14:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.869 [2024-11-17 08:14:31.650158] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:26.869 [2024-11-17 08:14:31.652543] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:26.869 08:14:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.869 08:14:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:26.869 08:14:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:26.869 08:14:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:26.869 08:14:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:26.869 08:14:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.869 08:14:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.128 08:14:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.128 08:14:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:27.128 08:14:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:27.128 08:14:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.128 08:14:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.128 [2024-11-17 08:14:31.939341] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:27.128 [2024-11-17 08:14:31.939851] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:27.128 [2024-11-17 08:14:31.939872] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:27.128 [2024-11-17 08:14:31.939886] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.128 [2024-11-17 08:14:31.950214] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.128 [2024-11-17 08:14:31.950241] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.128 [2024-11-17 08:14:31.957266] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.128 [2024-11-17 08:14:31.958061] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:27.128 [2024-11-17 08:14:31.997151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.128 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.128 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:27.128 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.128 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:27.128 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.128 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.387 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.387 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:27.387 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:27.387 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.387 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.387 [2024-11-17 08:14:32.220366] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:27.387 [2024-11-17 08:14:32.220869] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:27.387 [2024-11-17 08:14:32.220924] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:27.387 [2024-11-17 08:14:32.220934] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.387 [2024-11-17 08:14:32.228365] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.387 [2024-11-17 08:14:32.228392] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.387 [2024-11-17 08:14:32.236259] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.387 [2024-11-17 08:14:32.237045] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:27.387 [2024-11-17 08:14:32.245247] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.387 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.387 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:27.387 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.387 
08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:27.387 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.387 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.647 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.647 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:27.647 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:27.647 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.647 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.647 [2024-11-17 08:14:32.459308] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:27.647 [2024-11-17 08:14:32.459851] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:27.647 [2024-11-17 08:14:32.459872] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:27.647 [2024-11-17 08:14:32.459883] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.647 [2024-11-17 08:14:32.467291] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.647 [2024-11-17 08:14:32.467322] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.647 [2024-11-17 08:14:32.474227] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.647 [2024-11-17 08:14:32.475047] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:27.647 [2024-11-17 08:14:32.483217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.647 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.647 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:27.647 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.647 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:27.647 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.647 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.906 [2024-11-17 08:14:32.706360] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:27.906 [2024-11-17 08:14:32.706894] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:27.906 [2024-11-17 08:14:32.706919] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:27.906 [2024-11-17 08:14:32.706929] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.906 
[2024-11-17 08:14:32.714248] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.906 [2024-11-17 08:14:32.714275] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.906 [2024-11-17 08:14:32.721212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.906 [2024-11-17 08:14:32.721953] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:27.906 [2024-11-17 08:14:32.730308] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:27.906 { 00:14:27.906 "ublk_device": "/dev/ublkb0", 00:14:27.906 "id": 0, 00:14:27.906 "queue_depth": 512, 00:14:27.906 "num_queues": 4, 00:14:27.906 "bdev_name": "Malloc0" 00:14:27.906 }, 00:14:27.906 { 00:14:27.906 "ublk_device": "/dev/ublkb1", 00:14:27.906 "id": 1, 00:14:27.906 "queue_depth": 512, 00:14:27.906 "num_queues": 4, 00:14:27.906 "bdev_name": "Malloc1" 00:14:27.906 }, 00:14:27.906 { 00:14:27.906 "ublk_device": "/dev/ublkb2", 00:14:27.906 "id": 2, 00:14:27.906 "queue_depth": 512, 00:14:27.906 "num_queues": 4, 00:14:27.906 "bdev_name": "Malloc2" 00:14:27.906 }, 00:14:27.906 { 00:14:27.906 "ublk_device": "/dev/ublkb3", 00:14:27.906 "id": 3, 00:14:27.906 "queue_depth": 512, 00:14:27.906 "num_queues": 4, 00:14:27.906 "bdev_name": "Malloc3" 00:14:27.906 } 00:14:27.906 ]' 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:27.906 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:28.165 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.165 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:28.165 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.165 08:14:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:28.165 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:28.165 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.165 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:28.165 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:28.165 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:28.165 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:28.165 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:28.424 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.424 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:28.424 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.424 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:28.424 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:28.424 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.424 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:28.424 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:28.424 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:28.424 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:28.424 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:28.683 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.683 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:28.683 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.683 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:28.683 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:28.683 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.683 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:28.683 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:28.683 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:28.683 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:28.683 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.942 [2024-11-17 08:14:33.815398] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.942 [2024-11-17 08:14:33.855733] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.942 [2024-11-17 08:14:33.856844] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.942 [2024-11-17 08:14:33.862240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.942 [2024-11-17 08:14:33.862612] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:28.942 [2024-11-17 08:14:33.862640] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.942 [2024-11-17 08:14:33.877234] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.942 [2024-11-17 08:14:33.916238] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.942 [2024-11-17 08:14:33.917220] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.942 [2024-11-17 08:14:33.924294] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.942 [2024-11-17 08:14:33.924665] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:28.942 [2024-11-17 08:14:33.924688] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.942 08:14:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.942 [2024-11-17 08:14:33.939253] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:29.201 [2024-11-17 08:14:33.975146] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:29.201 [2024-11-17 08:14:33.976145] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:29.201 [2024-11-17 08:14:33.982291] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:29.201 [2024-11-17 08:14:33.982630] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:29.201 [2024-11-17 08:14:33.982653] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:29.201 08:14:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.201 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.201 08:14:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:29.201 08:14:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.201 08:14:33 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.201 [2024-11-17 08:14:33.997228] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:29.201 [2024-11-17 08:14:34.027758] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:29.201 [2024-11-17 08:14:34.028757] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:29.201 [2024-11-17 08:14:34.035322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:29.201 [2024-11-17 08:14:34.035685] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:29.201 [2024-11-17 08:14:34.035736] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:29.201 08:14:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.201 08:14:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:29.470 [2024-11-17 08:14:34.328236] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:29.470 [2024-11-17 08:14:34.336176] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:29.470 [2024-11-17 08:14:34.336229] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:29.470 08:14:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:29.470 08:14:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:29.470 08:14:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:29.470 08:14:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.470 08:14:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.038 08:14:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.038 08:14:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.038 08:14:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:30.038 08:14:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.038 08:14:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.297 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.297 08:14:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.297 08:14:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:30.297 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.297 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.556 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.556 08:14:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.556 08:14:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:30.556 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.556 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:30.815 08:14:35 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:30.815 ************************************ 00:14:30.815 END TEST test_create_multi_ublk 00:14:30.815 ************************************ 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:30.815 00:14:30.815 real 0m4.169s 00:14:30.815 user 0m1.372s 00:14:30.815 sys 0m0.159s 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.815 08:14:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.074 08:14:35 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:31.074 08:14:35 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:31.074 08:14:35 ublk -- ublk/ublk.sh@130 -- # killprocess 71917 00:14:31.074 08:14:35 ublk -- common/autotest_common.sh@954 -- # '[' -z 71917 ']' 00:14:31.074 08:14:35 ublk -- common/autotest_common.sh@958 -- # kill -0 71917 00:14:31.074 08:14:35 ublk -- common/autotest_common.sh@959 -- # uname 00:14:31.074 08:14:35 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.074 08:14:35 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71917 00:14:31.074 killing process with pid 71917 00:14:31.074 08:14:35 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.074 08:14:35 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:31.074 08:14:35 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71917' 00:14:31.074 08:14:35 ublk -- common/autotest_common.sh@973 -- # kill 71917 00:14:31.074 08:14:35 ublk -- common/autotest_common.sh@978 -- # wait 71917 00:14:31.643 [2024-11-17 08:14:36.644054] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:31.643 [2024-11-17 08:14:36.644396] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:32.579 00:14:32.579 real 0m27.507s 00:14:32.579 user 0m39.451s 00:14:32.579 sys 0m10.184s 00:14:32.579 08:14:37 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:32.579 ************************************ 00:14:32.579 08:14:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.579 END TEST ublk 00:14:32.579 ************************************ 00:14:32.837 08:14:37 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:32.837 
08:14:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:32.837 08:14:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.837 08:14:37 -- common/autotest_common.sh@10 -- # set +x 00:14:32.837 ************************************ 00:14:32.837 START TEST ublk_recovery 00:14:32.837 ************************************ 00:14:32.837 08:14:37 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:32.837 * Looking for test storage... 00:14:32.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:32.837 08:14:37 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:32.837 08:14:37 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:32.837 08:14:37 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:32.837 08:14:37 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:32.837 08:14:37 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:32.837 08:14:37 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:32.838 08:14:37 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:32.838 08:14:37 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:32.838 08:14:37 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:32.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.838 --rc genhtml_branch_coverage=1 00:14:32.838 --rc genhtml_function_coverage=1 00:14:32.838 --rc genhtml_legend=1 00:14:32.838 --rc geninfo_all_blocks=1 00:14:32.838 --rc geninfo_unexecuted_blocks=1 00:14:32.838 00:14:32.838 ' 00:14:32.838 08:14:37 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:32.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.838 --rc genhtml_branch_coverage=1 00:14:32.838 --rc genhtml_function_coverage=1 00:14:32.838 --rc genhtml_legend=1 00:14:32.838 --rc geninfo_all_blocks=1 00:14:32.838 --rc geninfo_unexecuted_blocks=1 00:14:32.838 00:14:32.838 ' 00:14:32.838 08:14:37 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:32.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.838 --rc genhtml_branch_coverage=1 00:14:32.838 --rc genhtml_function_coverage=1 00:14:32.838 --rc genhtml_legend=1 00:14:32.838 --rc geninfo_all_blocks=1 00:14:32.838 --rc geninfo_unexecuted_blocks=1 00:14:32.838 00:14:32.838 ' 00:14:32.838 08:14:37 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:32.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.838 --rc genhtml_branch_coverage=1 00:14:32.838 --rc genhtml_function_coverage=1 00:14:32.838 --rc genhtml_legend=1 00:14:32.838 --rc geninfo_all_blocks=1 00:14:32.838 --rc geninfo_unexecuted_blocks=1 00:14:32.838 00:14:32.838 ' 00:14:32.838 08:14:37 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:32.838 08:14:37 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:32.838 08:14:37 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:32.838 08:14:37 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:32.838 08:14:37 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:32.838 08:14:37 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:32.838 08:14:37 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:32.838 08:14:37 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:32.838 08:14:37 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:32.838 08:14:37 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:32.838 08:14:37 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=72322 00:14:32.838 08:14:37 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:32.838 08:14:37 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:32.838 08:14:37 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 72322 00:14:32.838 08:14:37 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 72322 ']' 00:14:32.838 08:14:37 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.838 08:14:37 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.838 08:14:37 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.838 08:14:37 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.838 08:14:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.096 [2024-11-17 08:14:37.908667] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:33.096 [2024-11-17 08:14:37.908794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72322 ] 00:14:33.096 [2024-11-17 08:14:38.076224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:33.355 [2024-11-17 08:14:38.160449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.355 [2024-11-17 08:14:38.160465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.923 08:14:38 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.923 08:14:38 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:33.923 08:14:38 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:33.923 08:14:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.923 08:14:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:33.923 [2024-11-17 08:14:38.920195] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:33.923 [2024-11-17 08:14:38.922386] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:33.923 08:14:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.923 08:14:38 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:33.923 08:14:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.923 08:14:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.182 malloc0 00:14:34.182 08:14:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.182 08:14:39 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:34.182 08:14:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.182 08:14:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:34.182 [2024-11-17 08:14:39.035298] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:34.182 [2024-11-17 08:14:39.035453] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:34.182 [2024-11-17 08:14:39.035486] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:34.182 [2024-11-17 08:14:39.035497] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:34.182 [2024-11-17 08:14:39.043258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:34.182 [2024-11-17 08:14:39.043287] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:34.182 [2024-11-17 08:14:39.051268] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:34.182 [2024-11-17 08:14:39.051474] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:34.182 [2024-11-17 08:14:39.081259] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:34.182 1 00:14:34.182 08:14:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.182 08:14:39 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:35.125 08:14:40 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=72357 00:14:35.125 08:14:40 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:35.125 08:14:40 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:35.383 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:35.383 fio-3.35 00:14:35.383 Starting 1 process 00:14:40.656 08:14:45 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 72322 00:14:40.656 08:14:45 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:45.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.929 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 72322 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:45.929 08:14:50 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=72468 00:14:45.929 08:14:50 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:45.929 08:14:50 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 72468 00:14:45.929 08:14:50 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:45.929 08:14:50 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 72468 ']' 00:14:45.929 08:14:50 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.929 08:14:50 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.929 08:14:50 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.929 08:14:50 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.929 08:14:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.929 [2024-11-17 08:14:50.231224] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
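At this point ublk_recovery.sh has killed the original spdk_tgt (pid 72322) with SIGKILL while the fio job was mid-I/O, and a replacement target is starting. The stretch of log that follows exercises user-space crash recovery: the kernel side of /dev/ublkb1 survives the daemon crash, and the new process re-attaches to it instead of creating a fresh device. A minimal sketch of that sequence, using the same RPCs and ids recorded below (paths assume the repo layout of this run):

  # restart the target, then recover rather than recreate the ublk device
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev
  scripts/rpc.py ublk_recover_disk malloc0 1             # GET_DEV_INFO, then START/END_USER_RECOVERY

While the daemon is gone, in-flight I/O is held rather than failed, so the fio job started before the kill runs through to its normal 60-second summary once recovery completes, as shown further down.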
00:14:45.929 [2024-11-17 08:14:50.231411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72468 ] 00:14:45.929 [2024-11-17 08:14:50.416255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:45.929 [2024-11-17 08:14:50.523290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.929 [2024-11-17 08:14:50.523310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.188 08:14:51 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.188 08:14:51 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:46.188 08:14:51 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:46.188 08:14:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.188 08:14:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.188 [2024-11-17 08:14:51.164195] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:46.188 [2024-11-17 08:14:51.166761] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:46.188 08:14:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.188 08:14:51 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:46.188 08:14:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.188 08:14:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.448 malloc0 00:14:46.448 08:14:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.448 08:14:51 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:46.448 08:14:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.448 08:14:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.448 [2024-11-17 08:14:51.281395] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:46.448 [2024-11-17 08:14:51.281455] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:46.448 [2024-11-17 08:14:51.281471] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:46.448 [2024-11-17 08:14:51.289220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:46.448 [2024-11-17 08:14:51.289266] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:46.448 [2024-11-17 08:14:51.289280] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:46.448 [2024-11-17 08:14:51.289371] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:46.448 1 00:14:46.448 08:14:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.448 08:14:51 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 72357 00:14:46.448 [2024-11-17 08:14:51.296221] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:46.448 [2024-11-17 08:14:51.302522] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:46.448 [2024-11-17 08:14:51.310386] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:46.448 [2024-11-17 
08:14:51.310437] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:42.687 00:15:42.687 fio_test: (groupid=0, jobs=1): err= 0: pid=72360: Sun Nov 17 08:15:40 2024 00:15:42.687 read: IOPS=21.3k, BW=83.4MiB/s (87.4MB/s)(5004MiB/60002msec) 00:15:42.687 slat (usec): min=2, max=247, avg= 5.65, stdev= 2.53 00:15:42.687 clat (usec): min=1024, max=6220.8k, avg=2914.09, stdev=41838.53 00:15:42.687 lat (usec): min=1028, max=6220.8k, avg=2919.74, stdev=41838.53 00:15:42.687 clat percentiles (usec): 00:15:42.687 | 1.00th=[ 2180], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2409], 00:15:42.687 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2540], 00:15:42.687 | 70.00th=[ 2573], 80.00th=[ 2671], 90.00th=[ 2868], 95.00th=[ 3490], 00:15:42.687 | 99.00th=[ 5211], 99.50th=[ 5735], 99.90th=[ 6783], 99.95th=[ 7635], 00:15:42.687 | 99.99th=[13173] 00:15:42.687 bw ( KiB/s): min= 464, max=99488, per=100.00%, avg=94081.45, stdev=11903.73, samples=108 00:15:42.687 iops : min= 116, max=24872, avg=23520.35, stdev=2975.93, samples=108 00:15:42.687 write: IOPS=21.3k, BW=83.3MiB/s (87.4MB/s)(4999MiB/60002msec); 0 zone resets 00:15:42.687 slat (usec): min=2, max=338, avg= 5.90, stdev= 2.62 00:15:42.687 clat (usec): min=820, max=6221.1k, avg=3071.15, stdev=45984.23 00:15:42.687 lat (usec): min=832, max=6221.1k, avg=3077.05, stdev=45984.22 00:15:42.687 clat percentiles (usec): 00:15:42.687 | 1.00th=[ 2245], 5.00th=[ 2442], 10.00th=[ 2474], 20.00th=[ 2507], 00:15:42.687 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2638], 00:15:42.687 | 70.00th=[ 2704], 80.00th=[ 2769], 90.00th=[ 2966], 95.00th=[ 3425], 00:15:42.687 | 99.00th=[ 5211], 99.50th=[ 5866], 99.90th=[ 6849], 99.95th=[ 7701], 00:15:42.687 | 99.99th=[13173] 00:15:42.687 bw ( KiB/s): min= 560, max=99584, per=100.00%, avg=94015.92, stdev=11817.43, samples=108 00:15:42.687 iops : min= 140, max=24896, avg=23503.95, stdev=2954.36, samples=108 00:15:42.687 lat (usec) : 1000=0.01% 00:15:42.687 lat (msec) : 2=0.34%, 4=96.34%, 10=3.30%, 20=0.02%, >=2000=0.01% 00:15:42.687 cpu : usr=10.72%, sys=22.89%, ctx=76072, majf=0, minf=13 00:15:42.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:42.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.687 issued rwts: total=1280983,1279838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.687 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.687 00:15:42.687 Run status group 0 (all jobs): 00:15:42.687 READ: bw=83.4MiB/s (87.4MB/s), 83.4MiB/s-83.4MiB/s (87.4MB/s-87.4MB/s), io=5004MiB (5247MB), run=60002-60002msec 00:15:42.687 WRITE: bw=83.3MiB/s (87.4MB/s), 83.3MiB/s-83.3MiB/s (87.4MB/s-87.4MB/s), io=4999MiB (5242MB), run=60002-60002msec 00:15:42.687 00:15:42.687 Disk stats (read/write): 00:15:42.687 ublkb1: ios=1278294/1277199, merge=0/0, ticks=3623139/3690065, in_queue=7313204, util=99.92% 00:15:42.687 08:15:40 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:42.687 08:15:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.687 08:15:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.687 [2024-11-17 08:15:40.359288] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:42.687 [2024-11-17 08:15:40.408173] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:42.688 
[2024-11-17 08:15:40.408367] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:42.688 [2024-11-17 08:15:40.416319] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:42.688 [2024-11-17 08:15:40.416631] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:42.688 [2024-11-17 08:15:40.416820] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.688 08:15:40 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.688 [2024-11-17 08:15:40.430237] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:42.688 [2024-11-17 08:15:40.439099] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:42.688 [2024-11-17 08:15:40.439158] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.688 08:15:40 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:42.688 08:15:40 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:42.688 08:15:40 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 72468 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 72468 ']' 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 72468 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72468 00:15:42.688 killing process with pid 72468 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72468' 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@973 -- # kill 72468 00:15:42.688 08:15:40 ublk_recovery -- common/autotest_common.sh@978 -- # wait 72468 00:15:42.688 [2024-11-17 08:15:41.714046] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:42.688 [2024-11-17 08:15:41.714127] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:42.688 00:15:42.688 real 1m5.097s 00:15:42.688 user 1m49.026s 00:15:42.688 sys 0m30.662s 00:15:42.688 08:15:42 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.688 08:15:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.688 ************************************ 00:15:42.688 END TEST ublk_recovery 00:15:42.688 ************************************ 00:15:42.688 08:15:42 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:15:42.688 08:15:42 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:42.688 08:15:42 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:42.688 08:15:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:42.688 08:15:42 -- common/autotest_common.sh@10 -- # set +x 00:15:42.688 08:15:42 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:42.688 08:15:42 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:15:42.688 08:15:42 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:15:42.688 
08:15:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:42.688 08:15:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:42.688 08:15:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:15:42.688 08:15:42 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:15:42.688 08:15:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:15:42.688 08:15:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:15:42.688 08:15:42 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:15:42.688 08:15:42 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:42.688 08:15:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:42.688 08:15:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.688 08:15:42 -- common/autotest_common.sh@10 -- # set +x 00:15:42.688 ************************************ 00:15:42.688 START TEST ftl 00:15:42.688 ************************************ 00:15:42.688 08:15:42 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:42.688 * Looking for test storage... 00:15:42.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:42.688 08:15:42 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:42.688 08:15:42 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:15:42.688 08:15:42 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:42.688 08:15:42 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:42.688 08:15:42 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.688 08:15:42 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.688 08:15:42 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.688 08:15:42 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.688 08:15:42 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.688 08:15:42 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.688 08:15:42 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.688 08:15:42 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.688 08:15:42 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.688 08:15:42 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.688 08:15:42 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.688 08:15:42 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:42.688 08:15:42 ftl -- scripts/common.sh@345 -- # : 1 00:15:42.688 08:15:42 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.688 08:15:42 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.688 08:15:42 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:42.688 08:15:42 ftl -- scripts/common.sh@353 -- # local d=1 00:15:42.688 08:15:42 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.688 08:15:42 ftl -- scripts/common.sh@355 -- # echo 1 00:15:42.688 08:15:42 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.688 08:15:42 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:42.688 08:15:42 ftl -- scripts/common.sh@353 -- # local d=2 00:15:42.688 08:15:42 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.688 08:15:42 ftl -- scripts/common.sh@355 -- # echo 2 00:15:42.688 08:15:42 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.688 08:15:42 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.688 08:15:42 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.688 08:15:42 ftl -- scripts/common.sh@368 -- # return 0 00:15:42.688 08:15:42 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.688 08:15:42 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:42.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.688 --rc genhtml_branch_coverage=1 00:15:42.688 --rc genhtml_function_coverage=1 00:15:42.688 --rc genhtml_legend=1 00:15:42.688 --rc geninfo_all_blocks=1 00:15:42.688 --rc geninfo_unexecuted_blocks=1 00:15:42.688 00:15:42.688 ' 00:15:42.688 08:15:42 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:42.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.688 --rc genhtml_branch_coverage=1 00:15:42.688 --rc genhtml_function_coverage=1 00:15:42.688 --rc genhtml_legend=1 00:15:42.688 --rc geninfo_all_blocks=1 00:15:42.688 --rc geninfo_unexecuted_blocks=1 00:15:42.688 00:15:42.688 ' 00:15:42.688 08:15:42 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:42.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.688 --rc genhtml_branch_coverage=1 00:15:42.688 --rc genhtml_function_coverage=1 00:15:42.688 --rc genhtml_legend=1 00:15:42.688 --rc geninfo_all_blocks=1 00:15:42.688 --rc geninfo_unexecuted_blocks=1 00:15:42.688 00:15:42.688 ' 00:15:42.688 08:15:42 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:42.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.688 --rc genhtml_branch_coverage=1 00:15:42.688 --rc genhtml_function_coverage=1 00:15:42.688 --rc genhtml_legend=1 00:15:42.688 --rc geninfo_all_blocks=1 00:15:42.688 --rc geninfo_unexecuted_blocks=1 00:15:42.688 00:15:42.688 ' 00:15:42.688 08:15:42 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:42.688 08:15:42 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:42.688 08:15:43 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:42.688 08:15:43 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:42.688 08:15:43 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
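# The xtrace above is scripts/common.sh deciding whether the installed lcov
# predates 2.x ('lt 1.15 2' holds, so the 1.x-style LCOV_OPTS get exported).
# Condensed sketch of that component-wise compare; the real helper also runs
# each field through its decimal() normalizer before comparing.
cmp_versions() {
    local IFS=.-:
    local op=$2 v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        ((a > b)) && { [[ $op == '>'* ]]; return; }
        ((a < b)) && { [[ $op == '<'* ]]; return; }
    done
    [[ $op == *=* ]]                      # all components equal: only <=, >=, == hold
}
lt() { cmp_versions "$1" '<' "$2"; }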
00:15:42.688 08:15:43 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:42.688 08:15:43 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.688 08:15:43 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:42.688 08:15:43 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:42.688 08:15:43 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.688 08:15:43 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.688 08:15:43 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:42.688 08:15:43 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:42.688 08:15:43 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:42.688 08:15:43 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:42.688 08:15:43 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:42.688 08:15:43 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:42.688 08:15:43 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.688 08:15:43 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.688 08:15:43 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:42.688 08:15:43 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:42.688 08:15:43 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:42.688 08:15:43 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:42.688 08:15:43 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:42.689 08:15:43 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:42.689 08:15:43 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:42.689 08:15:43 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:42.689 08:15:43 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:42.689 08:15:43 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:42.689 08:15:43 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.689 08:15:43 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:42.689 08:15:43 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:42.689 08:15:43 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:42.689 08:15:43 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:42.689 08:15:43 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:42.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:42.689 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:42.689 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:42.689 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:42.689 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:42.689 08:15:43 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=73264 00:15:42.689 08:15:43 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:42.689 08:15:43 ftl -- ftl/ftl.sh@38 -- # waitforlisten 73264 00:15:42.689 08:15:43 ftl -- common/autotest_common.sh@835 -- # '[' -z 73264 ']' 00:15:42.689 08:15:43 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.689 08:15:43 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.689 08:15:43 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.689 08:15:43 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.689 08:15:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:42.689 [2024-11-17 08:15:43.707143] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:15:42.689 [2024-11-17 08:15:43.708107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73264 ] 00:15:42.689 [2024-11-17 08:15:43.895979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.689 [2024-11-17 08:15:44.020302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.689 08:15:44 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.689 08:15:44 ftl -- common/autotest_common.sh@868 -- # return 0 00:15:42.689 08:15:44 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:42.689 08:15:44 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:42.689 08:15:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:42.689 08:15:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@50 -- # break 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@63 -- # break 00:15:42.689 08:15:46 ftl -- ftl/ftl.sh@66 -- # killprocess 73264 00:15:42.689 08:15:46 ftl -- common/autotest_common.sh@954 -- # '[' -z 73264 ']' 00:15:42.689 08:15:46 ftl -- common/autotest_common.sh@958 -- # kill -0 73264 00:15:42.689 08:15:46 ftl -- common/autotest_common.sh@959 -- # uname 00:15:42.689 08:15:46 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.689 08:15:46 ftl -- 
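# The ftl.sh prologue traced above, collapsed: launch spdk_tgt with
# --wait-for-rpc, wait for the RPC socket, configure bdevs, start the
# framework, attach NVMe, then pick the cache and base disks with the jq
# filters the log shows verbatim. The polling loop is a simplified stand-in
# for autotest_common.sh's waitforlisten.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$spdk_tgt" --wait-for-rpc & spdk_tgt_pid=$!
until "$rpc_py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
"$rpc_py" bdev_set_options -d             # -d: disable bdev auto-examine
"$rpc_py" framework_start_init
"$rpc_py" load_subsystem_config -j <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)
cache_disks=$("$rpc_py" bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
base_disks=$("$rpc_py" bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')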
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73264 00:15:42.689 killing process with pid 73264 00:15:42.689 08:15:46 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.689 08:15:46 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.689 08:15:46 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73264' 00:15:42.689 08:15:46 ftl -- common/autotest_common.sh@973 -- # kill 73264 00:15:42.689 08:15:46 ftl -- common/autotest_common.sh@978 -- # wait 73264 00:15:43.627 08:15:48 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:43.627 08:15:48 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:43.627 08:15:48 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:43.627 08:15:48 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.627 08:15:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:43.627 ************************************ 00:15:43.627 START TEST ftl_fio_basic 00:15:43.627 ************************************ 00:15:43.627 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:43.627 * Looking for test storage... 00:15:43.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:43.627 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:43.627 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:15:43.627 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:43.886 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:43.886 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.886 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.886 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.886 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.886 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.886 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.886 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.886 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.886 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.886 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.886 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:43.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.887 --rc genhtml_branch_coverage=1 00:15:43.887 --rc genhtml_function_coverage=1 00:15:43.887 --rc genhtml_legend=1 00:15:43.887 --rc geninfo_all_blocks=1 00:15:43.887 --rc geninfo_unexecuted_blocks=1 00:15:43.887 00:15:43.887 ' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:43.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.887 --rc genhtml_branch_coverage=1 00:15:43.887 --rc genhtml_function_coverage=1 00:15:43.887 --rc genhtml_legend=1 00:15:43.887 --rc geninfo_all_blocks=1 00:15:43.887 --rc geninfo_unexecuted_blocks=1 00:15:43.887 00:15:43.887 ' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:43.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.887 --rc genhtml_branch_coverage=1 00:15:43.887 --rc genhtml_function_coverage=1 00:15:43.887 --rc genhtml_legend=1 00:15:43.887 --rc geninfo_all_blocks=1 00:15:43.887 --rc geninfo_unexecuted_blocks=1 00:15:43.887 00:15:43.887 ' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:43.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.887 --rc genhtml_branch_coverage=1 00:15:43.887 --rc genhtml_function_coverage=1 00:15:43.887 --rc genhtml_legend=1 00:15:43.887 --rc geninfo_all_blocks=1 00:15:43.887 --rc geninfo_unexecuted_blocks=1 00:15:43.887 00:15:43.887 ' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=73404 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 73404 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 73404 ']' 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.887 08:15:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:43.887 [2024-11-17 08:15:48.829923] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
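# fio.sh keys its job lists off the associative array traced above; this run
# selected the 'basic' suite. A sketch of that wiring (the positional-argument
# mapping is an assumption; the trace only shows the resulting assignments):
declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
device=$1 cache_device=$2 tests=${suite[$3]}
timeout=240
[ -n "$tests" ] || exit 1                 # the '[ -z ... ]' guard in the trace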
00:15:43.887 [2024-11-17 08:15:48.830364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73404 ] 00:15:44.147 [2024-11-17 08:15:49.006444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:44.147 [2024-11-17 08:15:49.089694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.147 [2024-11-17 08:15:49.089809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.147 [2024-11-17 08:15:49.089828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.123 08:15:49 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.123 08:15:49 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:15:45.123 08:15:49 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:45.123 08:15:49 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:45.123 08:15:49 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:45.123 08:15:49 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:45.123 08:15:49 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:45.123 08:15:49 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:45.393 08:15:50 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:45.393 08:15:50 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:45.393 08:15:50 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:45.393 08:15:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:45.393 08:15:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:45.393 08:15:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:45.393 08:15:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:45.393 08:15:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:45.652 08:15:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:45.652 { 00:15:45.652 "name": "nvme0n1", 00:15:45.652 "aliases": [ 00:15:45.652 "68b25d8e-ffab-4315-b25f-1d836754b004" 00:15:45.652 ], 00:15:45.652 "product_name": "NVMe disk", 00:15:45.652 "block_size": 4096, 00:15:45.652 "num_blocks": 1310720, 00:15:45.652 "uuid": "68b25d8e-ffab-4315-b25f-1d836754b004", 00:15:45.652 "numa_id": -1, 00:15:45.652 "assigned_rate_limits": { 00:15:45.652 "rw_ios_per_sec": 0, 00:15:45.652 "rw_mbytes_per_sec": 0, 00:15:45.652 "r_mbytes_per_sec": 0, 00:15:45.652 "w_mbytes_per_sec": 0 00:15:45.652 }, 00:15:45.652 "claimed": false, 00:15:45.652 "zoned": false, 00:15:45.652 "supported_io_types": { 00:15:45.652 "read": true, 00:15:45.652 "write": true, 00:15:45.652 "unmap": true, 00:15:45.652 "flush": true, 00:15:45.652 "reset": true, 00:15:45.652 "nvme_admin": true, 00:15:45.652 "nvme_io": true, 00:15:45.652 "nvme_io_md": false, 00:15:45.652 "write_zeroes": true, 00:15:45.652 "zcopy": false, 00:15:45.652 "get_zone_info": false, 00:15:45.652 "zone_management": false, 00:15:45.652 "zone_append": false, 00:15:45.652 "compare": true, 00:15:45.652 "compare_and_write": false, 00:15:45.652 "abort": true, 00:15:45.652 
"seek_hole": false, 00:15:45.652 "seek_data": false, 00:15:45.652 "copy": true, 00:15:45.652 "nvme_iov_md": false 00:15:45.652 }, 00:15:45.652 "driver_specific": { 00:15:45.652 "nvme": [ 00:15:45.652 { 00:15:45.652 "pci_address": "0000:00:11.0", 00:15:45.652 "trid": { 00:15:45.652 "trtype": "PCIe", 00:15:45.652 "traddr": "0000:00:11.0" 00:15:45.652 }, 00:15:45.652 "ctrlr_data": { 00:15:45.652 "cntlid": 0, 00:15:45.652 "vendor_id": "0x1b36", 00:15:45.652 "model_number": "QEMU NVMe Ctrl", 00:15:45.652 "serial_number": "12341", 00:15:45.652 "firmware_revision": "8.0.0", 00:15:45.652 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:45.652 "oacs": { 00:15:45.652 "security": 0, 00:15:45.652 "format": 1, 00:15:45.652 "firmware": 0, 00:15:45.652 "ns_manage": 1 00:15:45.652 }, 00:15:45.652 "multi_ctrlr": false, 00:15:45.652 "ana_reporting": false 00:15:45.652 }, 00:15:45.652 "vs": { 00:15:45.652 "nvme_version": "1.4" 00:15:45.652 }, 00:15:45.652 "ns_data": { 00:15:45.652 "id": 1, 00:15:45.652 "can_share": false 00:15:45.652 } 00:15:45.652 } 00:15:45.652 ], 00:15:45.652 "mp_policy": "active_passive" 00:15:45.652 } 00:15:45.652 } 00:15:45.652 ]' 00:15:45.652 08:15:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:45.652 08:15:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:45.652 08:15:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:45.652 08:15:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:45.652 08:15:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:45.652 08:15:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:15:45.652 08:15:50 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:45.652 08:15:50 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:45.652 08:15:50 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:45.652 08:15:50 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:45.652 08:15:50 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:45.965 08:15:50 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:45.965 08:15:50 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:45.965 08:15:50 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=004ab635-3d9b-4d76-9d3f-b3ea38eaa024 00:15:45.965 08:15:50 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 004ab635-3d9b-4d76-9d3f-b3ea38eaa024 00:15:46.223 08:15:51 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc 00:15:46.483 08:15:51 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc 00:15:46.483 08:15:51 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:46.483 08:15:51 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:46.483 08:15:51 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc 00:15:46.483 08:15:51 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:46.483 08:15:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc 00:15:46.483 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc 
00:15:46.483 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:46.483 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:46.483 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:46.483 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc 00:15:46.483 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:46.483 { 00:15:46.483 "name": "4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc", 00:15:46.483 "aliases": [ 00:15:46.483 "lvs/nvme0n1p0" 00:15:46.483 ], 00:15:46.483 "product_name": "Logical Volume", 00:15:46.483 "block_size": 4096, 00:15:46.483 "num_blocks": 26476544, 00:15:46.483 "uuid": "4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc", 00:15:46.483 "assigned_rate_limits": { 00:15:46.483 "rw_ios_per_sec": 0, 00:15:46.483 "rw_mbytes_per_sec": 0, 00:15:46.483 "r_mbytes_per_sec": 0, 00:15:46.483 "w_mbytes_per_sec": 0 00:15:46.483 }, 00:15:46.483 "claimed": false, 00:15:46.483 "zoned": false, 00:15:46.483 "supported_io_types": { 00:15:46.483 "read": true, 00:15:46.483 "write": true, 00:15:46.483 "unmap": true, 00:15:46.483 "flush": false, 00:15:46.483 "reset": true, 00:15:46.483 "nvme_admin": false, 00:15:46.483 "nvme_io": false, 00:15:46.483 "nvme_io_md": false, 00:15:46.483 "write_zeroes": true, 00:15:46.483 "zcopy": false, 00:15:46.483 "get_zone_info": false, 00:15:46.483 "zone_management": false, 00:15:46.483 "zone_append": false, 00:15:46.483 "compare": false, 00:15:46.483 "compare_and_write": false, 00:15:46.483 "abort": false, 00:15:46.483 "seek_hole": true, 00:15:46.483 "seek_data": true, 00:15:46.483 "copy": false, 00:15:46.483 "nvme_iov_md": false 00:15:46.483 }, 00:15:46.483 "driver_specific": { 00:15:46.483 "lvol": { 00:15:46.483 "lvol_store_uuid": "004ab635-3d9b-4d76-9d3f-b3ea38eaa024", 00:15:46.483 "base_bdev": "nvme0n1", 00:15:46.483 "thin_provision": true, 00:15:46.483 "num_allocated_clusters": 0, 00:15:46.483 "snapshot": false, 00:15:46.483 "clone": false, 00:15:46.483 "esnap_clone": false 00:15:46.483 } 00:15:46.483 } 00:15:46.483 } 00:15:46.483 ]' 00:15:46.483 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:46.742 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:46.742 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:46.742 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:46.742 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:46.742 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:46.743 08:15:51 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:46.743 08:15:51 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:46.743 08:15:51 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:47.001 08:15:51 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:47.001 08:15:51 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:47.001 08:15:51 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc 00:15:47.001 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc 00:15:47.002 08:15:51 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:47.002 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:47.002 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:47.002 08:15:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc 00:15:47.261 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:47.261 { 00:15:47.261 "name": "4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc", 00:15:47.261 "aliases": [ 00:15:47.261 "lvs/nvme0n1p0" 00:15:47.261 ], 00:15:47.261 "product_name": "Logical Volume", 00:15:47.261 "block_size": 4096, 00:15:47.261 "num_blocks": 26476544, 00:15:47.261 "uuid": "4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc", 00:15:47.261 "assigned_rate_limits": { 00:15:47.261 "rw_ios_per_sec": 0, 00:15:47.261 "rw_mbytes_per_sec": 0, 00:15:47.261 "r_mbytes_per_sec": 0, 00:15:47.261 "w_mbytes_per_sec": 0 00:15:47.261 }, 00:15:47.261 "claimed": false, 00:15:47.261 "zoned": false, 00:15:47.261 "supported_io_types": { 00:15:47.261 "read": true, 00:15:47.261 "write": true, 00:15:47.261 "unmap": true, 00:15:47.261 "flush": false, 00:15:47.261 "reset": true, 00:15:47.261 "nvme_admin": false, 00:15:47.261 "nvme_io": false, 00:15:47.261 "nvme_io_md": false, 00:15:47.261 "write_zeroes": true, 00:15:47.261 "zcopy": false, 00:15:47.261 "get_zone_info": false, 00:15:47.261 "zone_management": false, 00:15:47.261 "zone_append": false, 00:15:47.261 "compare": false, 00:15:47.261 "compare_and_write": false, 00:15:47.261 "abort": false, 00:15:47.261 "seek_hole": true, 00:15:47.261 "seek_data": true, 00:15:47.261 "copy": false, 00:15:47.261 "nvme_iov_md": false 00:15:47.261 }, 00:15:47.261 "driver_specific": { 00:15:47.261 "lvol": { 00:15:47.261 "lvol_store_uuid": "004ab635-3d9b-4d76-9d3f-b3ea38eaa024", 00:15:47.261 "base_bdev": "nvme0n1", 00:15:47.261 "thin_provision": true, 00:15:47.261 "num_allocated_clusters": 0, 00:15:47.261 "snapshot": false, 00:15:47.261 "clone": false, 00:15:47.261 "esnap_clone": false 00:15:47.261 } 00:15:47.261 } 00:15:47.261 } 00:15:47.261 ]' 00:15:47.261 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:47.261 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:47.261 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:47.261 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:47.261 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:47.261 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:47.261 08:15:52 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:47.261 08:15:52 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:47.520 08:15:52 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:47.520 08:15:52 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:47.520 08:15:52 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:47.520 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:47.520 08:15:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc 00:15:47.520 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc 00:15:47.520 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:47.520 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:15:47.520 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:15:47.521 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc 00:15:47.779 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:47.779 { 00:15:47.779 "name": "4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc", 00:15:47.779 "aliases": [ 00:15:47.779 "lvs/nvme0n1p0" 00:15:47.779 ], 00:15:47.779 "product_name": "Logical Volume", 00:15:47.779 "block_size": 4096, 00:15:47.779 "num_blocks": 26476544, 00:15:47.779 "uuid": "4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc", 00:15:47.779 "assigned_rate_limits": { 00:15:47.779 "rw_ios_per_sec": 0, 00:15:47.779 "rw_mbytes_per_sec": 0, 00:15:47.779 "r_mbytes_per_sec": 0, 00:15:47.779 "w_mbytes_per_sec": 0 00:15:47.779 }, 00:15:47.779 "claimed": false, 00:15:47.779 "zoned": false, 00:15:47.779 "supported_io_types": { 00:15:47.779 "read": true, 00:15:47.779 "write": true, 00:15:47.779 "unmap": true, 00:15:47.779 "flush": false, 00:15:47.779 "reset": true, 00:15:47.779 "nvme_admin": false, 00:15:47.779 "nvme_io": false, 00:15:47.779 "nvme_io_md": false, 00:15:47.779 "write_zeroes": true, 00:15:47.779 "zcopy": false, 00:15:47.779 "get_zone_info": false, 00:15:47.779 "zone_management": false, 00:15:47.779 "zone_append": false, 00:15:47.779 "compare": false, 00:15:47.779 "compare_and_write": false, 00:15:47.779 "abort": false, 00:15:47.779 "seek_hole": true, 00:15:47.779 "seek_data": true, 00:15:47.779 "copy": false, 00:15:47.779 "nvme_iov_md": false 00:15:47.779 }, 00:15:47.779 "driver_specific": { 00:15:47.779 "lvol": { 00:15:47.779 "lvol_store_uuid": "004ab635-3d9b-4d76-9d3f-b3ea38eaa024", 00:15:47.779 "base_bdev": "nvme0n1", 00:15:47.779 "thin_provision": true, 00:15:47.779 "num_allocated_clusters": 0, 00:15:47.779 "snapshot": false, 00:15:47.779 "clone": false, 00:15:47.779 "esnap_clone": false 00:15:47.779 } 00:15:47.779 } 00:15:47.779 } 00:15:47.779 ]' 00:15:47.779 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:48.038 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:15:48.038 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:48.038 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:48.038 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:48.038 08:15:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:15:48.038 08:15:52 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:48.038 08:15:52 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:48.038 08:15:52 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc -c nvc0n1p0 --l2p_dram_limit 60 00:15:48.298 [2024-11-17 08:15:53.082849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.298 [2024-11-17 08:15:53.083479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:48.298 [2024-11-17 08:15:53.083527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:48.298 
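# The "[: -eq: unary operator expected" message traced earlier (fio.sh line 52,
# '[' -eq 1 ']') means the variable left of -eq expanded to nothing, so test
# saw a one-sided expression. It is benign in this run: the conditional falls
# through and l2p_dram_size_mb (60 here) is derived from the bdev size instead.
# The usual hardening is to default the operand, e.g. (hypothetical name):
[ "${l2p_flag:-0}" -eq 1 ]                # never expands to an empty operand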
[2024-11-17 08:15:53.083544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.298 [2024-11-17 08:15:53.083669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.298 [2024-11-17 08:15:53.083704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:48.298 [2024-11-17 08:15:53.083732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:15:48.298 [2024-11-17 08:15:53.083743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.298 [2024-11-17 08:15:53.083797] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:48.298 [2024-11-17 08:15:53.085323] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:48.298 [2024-11-17 08:15:53.085631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.298 [2024-11-17 08:15:53.085819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:48.298 [2024-11-17 08:15:53.086014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.850 ms 00:15:48.298 [2024-11-17 08:15:53.086266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.298 [2024-11-17 08:15:53.086537] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e4234dda-9d80-46e7-8ae0-5bb783e44dd0 00:15:48.298 [2024-11-17 08:15:53.087833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.298 [2024-11-17 08:15:53.088071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:48.298 [2024-11-17 08:15:53.088243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:48.298 [2024-11-17 08:15:53.088339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.298 [2024-11-17 08:15:53.092819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.298 [2024-11-17 08:15:53.092921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:48.298 [2024-11-17 08:15:53.093003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.152 ms 00:15:48.298 [2024-11-17 08:15:53.093134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.299 [2024-11-17 08:15:53.093368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.299 [2024-11-17 08:15:53.093525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:48.299 [2024-11-17 08:15:53.093801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:15:48.299 [2024-11-17 08:15:53.093887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.299 [2024-11-17 08:15:53.094051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.299 [2024-11-17 08:15:53.094292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:48.299 [2024-11-17 08:15:53.094387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:15:48.299 [2024-11-17 08:15:53.094487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.299 [2024-11-17 08:15:53.094600] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:48.299 [2024-11-17 08:15:53.098854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.299 [2024-11-17 
08:15:53.098884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:48.299 [2024-11-17 08:15:53.098901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.259 ms 00:15:48.299 [2024-11-17 08:15:53.098915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.299 [2024-11-17 08:15:53.098969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.299 [2024-11-17 08:15:53.098985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:48.299 [2024-11-17 08:15:53.098999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:48.299 [2024-11-17 08:15:53.099010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.299 [2024-11-17 08:15:53.099059] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:48.299 [2024-11-17 08:15:53.099260] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:48.299 [2024-11-17 08:15:53.099287] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:48.299 [2024-11-17 08:15:53.099303] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:48.299 [2024-11-17 08:15:53.099319] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:48.299 [2024-11-17 08:15:53.099333] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:48.299 [2024-11-17 08:15:53.099374] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:48.299 [2024-11-17 08:15:53.099402] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:48.299 [2024-11-17 08:15:53.099415] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:48.299 [2024-11-17 08:15:53.099427] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:48.299 [2024-11-17 08:15:53.099442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.299 [2024-11-17 08:15:53.099459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:48.299 [2024-11-17 08:15:53.099474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:15:48.299 [2024-11-17 08:15:53.099501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.299 [2024-11-17 08:15:53.099610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.299 [2024-11-17 08:15:53.099641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:48.299 [2024-11-17 08:15:53.099656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:15:48.299 [2024-11-17 08:15:53.099682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.299 [2024-11-17 08:15:53.099828] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:48.299 [2024-11-17 08:15:53.099845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:48.299 [2024-11-17 08:15:53.099861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:48.299 [2024-11-17 08:15:53.099873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.299 [2024-11-17 08:15:53.099886] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:48.299 [2024-11-17 08:15:53.099896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:48.299 [2024-11-17 08:15:53.099908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:48.299 [2024-11-17 08:15:53.099919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:48.299 [2024-11-17 08:15:53.099932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:48.299 [2024-11-17 08:15:53.099945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:48.299 [2024-11-17 08:15:53.099958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:48.299 [2024-11-17 08:15:53.099969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:48.299 [2024-11-17 08:15:53.099981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:48.299 [2024-11-17 08:15:53.099991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:48.299 [2024-11-17 08:15:53.100004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:48.299 [2024-11-17 08:15:53.100014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.299 [2024-11-17 08:15:53.100030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:48.299 [2024-11-17 08:15:53.100040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:48.299 [2024-11-17 08:15:53.100052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.299 [2024-11-17 08:15:53.100062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:48.299 [2024-11-17 08:15:53.100074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:48.299 [2024-11-17 08:15:53.100084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:48.299 [2024-11-17 08:15:53.100096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:48.299 [2024-11-17 08:15:53.100106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:48.299 [2024-11-17 08:15:53.100118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:48.299 [2024-11-17 08:15:53.100128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:48.299 [2024-11-17 08:15:53.100154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:48.299 [2024-11-17 08:15:53.100165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:48.299 [2024-11-17 08:15:53.100177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:48.299 [2024-11-17 08:15:53.100187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:48.299 [2024-11-17 08:15:53.100199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:48.299 [2024-11-17 08:15:53.100208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:48.299 [2024-11-17 08:15:53.100222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:48.299 [2024-11-17 08:15:53.100232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:48.299 [2024-11-17 08:15:53.100244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:48.299 [2024-11-17 08:15:53.100272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:48.299 [2024-11-17 08:15:53.100285] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:48.299 [2024-11-17 08:15:53.100296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:48.299 [2024-11-17 08:15:53.100308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:48.299 [2024-11-17 08:15:53.100317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.299 [2024-11-17 08:15:53.100332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:48.299 [2024-11-17 08:15:53.100344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:48.299 [2024-11-17 08:15:53.100356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.299 [2024-11-17 08:15:53.100366] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:48.299 [2024-11-17 08:15:53.100379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:48.299 [2024-11-17 08:15:53.100389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:48.299 [2024-11-17 08:15:53.100401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.299 [2024-11-17 08:15:53.100413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:48.299 [2024-11-17 08:15:53.100427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:48.299 [2024-11-17 08:15:53.100437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:48.299 [2024-11-17 08:15:53.100449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:48.299 [2024-11-17 08:15:53.100459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:48.299 [2024-11-17 08:15:53.100471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:48.299 [2024-11-17 08:15:53.100485] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:48.299 [2024-11-17 08:15:53.100500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:48.299 [2024-11-17 08:15:53.100512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:48.299 [2024-11-17 08:15:53.100524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:48.299 [2024-11-17 08:15:53.100535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:48.299 [2024-11-17 08:15:53.100546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:48.299 [2024-11-17 08:15:53.100556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:48.299 [2024-11-17 08:15:53.100568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:48.299 [2024-11-17 08:15:53.100578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:48.299 [2024-11-17 08:15:53.100590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:48.299 [2024-11-17 08:15:53.100600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:48.299 [2024-11-17 08:15:53.100615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:48.299 [2024-11-17 08:15:53.100625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:48.300 [2024-11-17 08:15:53.100637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:48.300 [2024-11-17 08:15:53.100647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:48.300 [2024-11-17 08:15:53.100659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:48.300 [2024-11-17 08:15:53.100669] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:48.300 [2024-11-17 08:15:53.100682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:48.300 [2024-11-17 08:15:53.100696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:48.300 [2024-11-17 08:15:53.100708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:48.300 [2024-11-17 08:15:53.100719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:48.300 [2024-11-17 08:15:53.100732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:48.300 [2024-11-17 08:15:53.100743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.300 [2024-11-17 08:15:53.100755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:48.300 [2024-11-17 08:15:53.100766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:15:48.300 [2024-11-17 08:15:53.100778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.300 [2024-11-17 08:15:53.100850] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
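# The device stack bdev_ftl_create is bringing up above, collapsed to the bare
# RPC sequence from the preceding trace (UUIDs are the ones this run created):
"$rpc_py" bdev_lvol_create_lvstore nvme0n1 lvs
"$rpc_py" bdev_lvol_create nvme0n1p0 103424 -t -u 004ab635-3d9b-4d76-9d3f-b3ea38eaa024   # thin lvol, 103424 MiB
"$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
"$rpc_py" bdev_split_create nvc0n1 -s 5171 1                                             # 5171 MiB NV cache slice
"$rpc_py" -t 240 bdev_ftl_create -b ftl0 -d 4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc -c nvc0n1p0 --l2p_dram_limit 60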
00:15:48.300 [2024-11-17 08:15:53.100872] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:51.586 [2024-11-17 08:15:56.368673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.586 [2024-11-17 08:15:56.368736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:51.586 [2024-11-17 08:15:56.368777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3267.844 ms 00:15:51.586 [2024-11-17 08:15:56.368790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.586 [2024-11-17 08:15:56.396982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.586 [2024-11-17 08:15:56.397262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:51.586 [2024-11-17 08:15:56.397388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.963 ms 00:15:51.586 [2024-11-17 08:15:56.397417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.586 [2024-11-17 08:15:56.397615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.586 [2024-11-17 08:15:56.397639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:51.586 [2024-11-17 08:15:56.397667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:15:51.586 [2024-11-17 08:15:56.397682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.586 [2024-11-17 08:15:56.443175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.586 [2024-11-17 08:15:56.443444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:51.586 [2024-11-17 08:15:56.443485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.431 ms 00:15:51.586 [2024-11-17 08:15:56.443508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.586 [2024-11-17 08:15:56.443574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.586 [2024-11-17 08:15:56.443601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:51.586 [2024-11-17 08:15:56.443618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:51.586 [2024-11-17 08:15:56.443636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.586 [2024-11-17 08:15:56.444159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.586 [2024-11-17 08:15:56.444195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:51.586 [2024-11-17 08:15:56.444214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:15:51.586 [2024-11-17 08:15:56.444235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.586 [2024-11-17 08:15:56.444446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.586 [2024-11-17 08:15:56.444480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:51.586 [2024-11-17 08:15:56.444497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:15:51.586 [2024-11-17 08:15:56.444518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.586 [2024-11-17 08:15:56.463400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.586 [2024-11-17 08:15:56.463462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:51.586 [2024-11-17 
08:15:56.463479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.842 ms 00:15:51.586 [2024-11-17 08:15:56.463492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.586 [2024-11-17 08:15:56.474922] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:51.586 [2024-11-17 08:15:56.487512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.586 [2024-11-17 08:15:56.487589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:51.586 [2024-11-17 08:15:56.487629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.893 ms 00:15:51.586 [2024-11-17 08:15:56.487643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.586 [2024-11-17 08:15:56.581418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.586 [2024-11-17 08:15:56.581498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:51.586 [2024-11-17 08:15:56.581540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.707 ms 00:15:51.586 [2024-11-17 08:15:56.581552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.586 [2024-11-17 08:15:56.581758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.586 [2024-11-17 08:15:56.581777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:51.586 [2024-11-17 08:15:56.581794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:15:51.586 [2024-11-17 08:15:56.581805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.846 [2024-11-17 08:15:56.610014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.846 [2024-11-17 08:15:56.610055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:51.846 [2024-11-17 08:15:56.610107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.140 ms 00:15:51.846 [2024-11-17 08:15:56.610134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.846 [2024-11-17 08:15:56.636606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.846 [2024-11-17 08:15:56.636643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:51.846 [2024-11-17 08:15:56.636679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.405 ms 00:15:51.846 [2024-11-17 08:15:56.636690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.846 [2024-11-17 08:15:56.637378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.846 [2024-11-17 08:15:56.637412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:51.846 [2024-11-17 08:15:56.637430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:15:51.846 [2024-11-17 08:15:56.637443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.846 [2024-11-17 08:15:56.730962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.846 [2024-11-17 08:15:56.731220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:51.846 [2024-11-17 08:15:56.731258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.445 ms 00:15:51.846 [2024-11-17 08:15:56.731276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.846 [2024-11-17 
08:15:56.758821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.846 [2024-11-17 08:15:56.758986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:51.846 [2024-11-17 08:15:56.759149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.377 ms 00:15:51.846 [2024-11-17 08:15:56.759253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.846 [2024-11-17 08:15:56.785911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.846 [2024-11-17 08:15:56.786090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:51.846 [2024-11-17 08:15:56.786222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.447 ms 00:15:51.846 [2024-11-17 08:15:56.786271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.846 [2024-11-17 08:15:56.813447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.846 [2024-11-17 08:15:56.813642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:51.846 [2024-11-17 08:15:56.813772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.018 ms 00:15:51.846 [2024-11-17 08:15:56.813871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.846 [2024-11-17 08:15:56.814020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.846 [2024-11-17 08:15:56.814176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:51.846 [2024-11-17 08:15:56.814317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:51.846 [2024-11-17 08:15:56.814370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.846 [2024-11-17 08:15:56.814572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.846 [2024-11-17 08:15:56.814638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:51.846 [2024-11-17 08:15:56.814745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:15:51.846 [2024-11-17 08:15:56.814850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.846 [2024-11-17 08:15:56.816183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3732.767 ms, result 0 00:15:51.846 { 00:15:51.846 "name": "ftl0", 00:15:51.846 "uuid": "e4234dda-9d80-46e7-8ae0-5bb783e44dd0" 00:15:51.846 } 00:15:51.846 08:15:56 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:51.846 08:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:15:51.846 08:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:51.846 08:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:15:51.846 08:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:51.846 08:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:51.846 08:15:56 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:52.104 08:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:52.363 [ 00:15:52.363 { 00:15:52.363 "name": "ftl0", 00:15:52.363 "aliases": [ 00:15:52.363 "e4234dda-9d80-46e7-8ae0-5bb783e44dd0" 00:15:52.363 ], 00:15:52.363 "product_name": "FTL 
disk", 00:15:52.363 "block_size": 4096, 00:15:52.363 "num_blocks": 20971520, 00:15:52.363 "uuid": "e4234dda-9d80-46e7-8ae0-5bb783e44dd0", 00:15:52.363 "assigned_rate_limits": { 00:15:52.363 "rw_ios_per_sec": 0, 00:15:52.363 "rw_mbytes_per_sec": 0, 00:15:52.363 "r_mbytes_per_sec": 0, 00:15:52.363 "w_mbytes_per_sec": 0 00:15:52.363 }, 00:15:52.363 "claimed": false, 00:15:52.363 "zoned": false, 00:15:52.363 "supported_io_types": { 00:15:52.363 "read": true, 00:15:52.363 "write": true, 00:15:52.363 "unmap": true, 00:15:52.363 "flush": true, 00:15:52.363 "reset": false, 00:15:52.363 "nvme_admin": false, 00:15:52.363 "nvme_io": false, 00:15:52.363 "nvme_io_md": false, 00:15:52.363 "write_zeroes": true, 00:15:52.363 "zcopy": false, 00:15:52.363 "get_zone_info": false, 00:15:52.363 "zone_management": false, 00:15:52.363 "zone_append": false, 00:15:52.363 "compare": false, 00:15:52.363 "compare_and_write": false, 00:15:52.363 "abort": false, 00:15:52.363 "seek_hole": false, 00:15:52.363 "seek_data": false, 00:15:52.363 "copy": false, 00:15:52.363 "nvme_iov_md": false 00:15:52.363 }, 00:15:52.363 "driver_specific": { 00:15:52.363 "ftl": { 00:15:52.363 "base_bdev": "4f4a63b1-11e7-48a2-a9dc-fdc4d668d3bc", 00:15:52.363 "cache": "nvc0n1p0" 00:15:52.363 } 00:15:52.363 } 00:15:52.363 } 00:15:52.363 ] 00:15:52.363 08:15:57 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:15:52.363 08:15:57 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:52.363 08:15:57 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:52.622 08:15:57 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:52.622 08:15:57 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:52.881 [2024-11-17 08:15:57.812629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.881 [2024-11-17 08:15:57.812713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:52.881 [2024-11-17 08:15:57.812742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:52.881 [2024-11-17 08:15:57.812765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.881 [2024-11-17 08:15:57.812826] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:52.881 [2024-11-17 08:15:57.816181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.881 [2024-11-17 08:15:57.816217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:52.881 [2024-11-17 08:15:57.816234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.314 ms 00:15:52.881 [2024-11-17 08:15:57.816246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.881 [2024-11-17 08:15:57.816757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.881 [2024-11-17 08:15:57.816791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:52.881 [2024-11-17 08:15:57.816826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:15:52.881 [2024-11-17 08:15:57.816838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.881 [2024-11-17 08:15:57.820682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.881 [2024-11-17 08:15:57.820730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:52.881 
[2024-11-17 08:15:57.820773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.786 ms 00:15:52.881 [2024-11-17 08:15:57.820791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.881 [2024-11-17 08:15:57.826794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.881 [2024-11-17 08:15:57.826827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:52.881 [2024-11-17 08:15:57.826860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.953 ms 00:15:52.881 [2024-11-17 08:15:57.826872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.881 [2024-11-17 08:15:57.854241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.881 [2024-11-17 08:15:57.854290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:52.881 [2024-11-17 08:15:57.854328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.268 ms 00:15:52.881 [2024-11-17 08:15:57.854339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.881 [2024-11-17 08:15:57.873200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.881 [2024-11-17 08:15:57.873241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:52.881 [2024-11-17 08:15:57.873261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.787 ms 00:15:52.881 [2024-11-17 08:15:57.873276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.881 [2024-11-17 08:15:57.873503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.881 [2024-11-17 08:15:57.873526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:52.881 [2024-11-17 08:15:57.873542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:15:52.881 [2024-11-17 08:15:57.873554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.140 [2024-11-17 08:15:57.903111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.140 [2024-11-17 08:15:57.903154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:53.140 [2024-11-17 08:15:57.903172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.526 ms 00:15:53.140 [2024-11-17 08:15:57.903183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.140 [2024-11-17 08:15:57.930471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.141 [2024-11-17 08:15:57.930522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:53.141 [2024-11-17 08:15:57.930556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.238 ms 00:15:53.141 [2024-11-17 08:15:57.930567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.141 [2024-11-17 08:15:57.957306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.141 [2024-11-17 08:15:57.957354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:53.141 [2024-11-17 08:15:57.957391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.685 ms 00:15:53.141 [2024-11-17 08:15:57.957402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.141 [2024-11-17 08:15:57.984177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.141 [2024-11-17 08:15:57.984215] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:53.141 [2024-11-17 08:15:57.984249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.634 ms 00:15:53.141 [2024-11-17 08:15:57.984261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.141 [2024-11-17 08:15:57.984317] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:53.141 [2024-11-17 08:15:57.984339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 
[2024-11-17 08:15:57.984605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:53.141 [2024-11-17 08:15:57.984908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.984989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:53.141 [2024-11-17 08:15:57.985307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:53.142 [2024-11-17 08:15:57.985672] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:53.142 [2024-11-17 08:15:57.985685] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e4234dda-9d80-46e7-8ae0-5bb783e44dd0 00:15:53.142 [2024-11-17 08:15:57.985696] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:53.142 [2024-11-17 08:15:57.985709] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:53.142 [2024-11-17 08:15:57.985720] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:53.142 [2024-11-17 08:15:57.985735] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:53.142 [2024-11-17 08:15:57.985746] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:53.142 [2024-11-17 08:15:57.985759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:53.142 [2024-11-17 08:15:57.985769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:53.142 [2024-11-17 08:15:57.985780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:53.142 [2024-11-17 08:15:57.985790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:53.142 [2024-11-17 08:15:57.985820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.142 [2024-11-17 08:15:57.985831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:53.142 [2024-11-17 08:15:57.985845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.507 ms 00:15:53.142 [2024-11-17 08:15:57.985857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.142 [2024-11-17 08:15:58.000640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.142 [2024-11-17 08:15:58.000676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:53.142 [2024-11-17 08:15:58.000711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.709 ms 00:15:53.142 [2024-11-17 08:15:58.000722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.142 [2024-11-17 08:15:58.001191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.142 [2024-11-17 08:15:58.001238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:53.142 [2024-11-17 08:15:58.001256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:15:53.142 [2024-11-17 08:15:58.001268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.142 [2024-11-17 08:15:58.050039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.142 [2024-11-17 08:15:58.050131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:53.142 [2024-11-17 08:15:58.050151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.142 [2024-11-17 08:15:58.050163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
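Each management step above is traced as a group of records (Action or Rollback, name, duration, status) from mngt/ftl_mngt.c. A one-off sketch for ranking the steps by duration from a saved copy of this output — ftl.log is a placeholder path, and the records are assumed one per line as originally emitted, before any log-viewer re-wrapping:

awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
     /430:trace_step/ { match($0, /duration: [0-9.]+/)
                        printf "%10s ms  %s\n", substr($0, RSTART + 10, RLENGTH - 10), name }' ftl.log |
  sort -rn | head
# With this run's numbers, 'Scrub NV cache' (3267.844 ms) would top the list.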
00:15:53.142 [2024-11-17 08:15:58.050235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.142 [2024-11-17 08:15:58.050250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:53.142 [2024-11-17 08:15:58.050263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.142 [2024-11-17 08:15:58.050274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.142 [2024-11-17 08:15:58.050421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.142 [2024-11-17 08:15:58.050440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:53.142 [2024-11-17 08:15:58.050458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.142 [2024-11-17 08:15:58.050469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.142 [2024-11-17 08:15:58.050504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.142 [2024-11-17 08:15:58.050519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:53.142 [2024-11-17 08:15:58.050548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.142 [2024-11-17 08:15:58.050559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.142 [2024-11-17 08:15:58.140357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.142 [2024-11-17 08:15:58.140420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:53.142 [2024-11-17 08:15:58.140456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.142 [2024-11-17 08:15:58.140468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.402 [2024-11-17 08:15:58.219984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.402 [2024-11-17 08:15:58.220189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:53.402 [2024-11-17 08:15:58.220340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.402 [2024-11-17 08:15:58.220425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.402 [2024-11-17 08:15:58.220640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.402 [2024-11-17 08:15:58.220862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:53.402 [2024-11-17 08:15:58.220961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.402 [2024-11-17 08:15:58.221031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.402 [2024-11-17 08:15:58.221246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.402 [2024-11-17 08:15:58.221526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:53.402 [2024-11-17 08:15:58.221661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.402 [2024-11-17 08:15:58.221824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.402 [2024-11-17 08:15:58.221987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.402 [2024-11-17 08:15:58.222006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:53.402 [2024-11-17 08:15:58.222021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.402 [2024-11-17 
08:15:58.222032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.402 [2024-11-17 08:15:58.222489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.402 [2024-11-17 08:15:58.222612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:53.402 [2024-11-17 08:15:58.222811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.402 [2024-11-17 08:15:58.222873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.402 [2024-11-17 08:15:58.222992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.402 [2024-11-17 08:15:58.223189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:53.402 [2024-11-17 08:15:58.223321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.402 [2024-11-17 08:15:58.223457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.402 [2024-11-17 08:15:58.223564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:53.402 [2024-11-17 08:15:58.223583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:53.402 [2024-11-17 08:15:58.223598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:53.402 [2024-11-17 08:15:58.223611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.402 [2024-11-17 08:15:58.223855] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 411.216 ms, result 0 00:15:53.402 true 00:15:53.402 08:15:58 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 73404 00:15:53.402 08:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 73404 ']' 00:15:53.402 08:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 73404 00:15:53.402 08:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:15:53.402 08:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.402 08:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73404 00:15:53.402 killing process with pid 73404 00:15:53.402 08:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.402 08:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.402 08:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73404' 00:15:53.402 08:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 73404 00:15:53.402 08:15:58 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 73404 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:57.594 08:16:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:57.594 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:57.594 fio-3.35 00:15:57.594 Starting 1 thread 00:16:02.870 00:16:02.870 test: (groupid=0, jobs=1): err= 0: pid=73607: Sun Nov 17 08:16:07 2024 00:16:02.870 read: IOPS=945, BW=62.8MiB/s (65.8MB/s)(255MiB/4054msec) 00:16:02.870 slat (nsec): min=5183, max=39112, avg=6734.08, stdev=3190.11 00:16:02.870 clat (usec): min=331, max=938, avg=473.94, stdev=52.41 00:16:02.870 lat (usec): min=337, max=951, avg=480.67, stdev=53.27 00:16:02.870 clat percentiles (usec): 00:16:02.870 | 1.00th=[ 388], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 437], 00:16:02.870 | 30.00th=[ 445], 40.00th=[ 449], 50.00th=[ 461], 60.00th=[ 474], 00:16:02.870 | 70.00th=[ 486], 80.00th=[ 506], 90.00th=[ 545], 95.00th=[ 578], 00:16:02.870 | 99.00th=[ 635], 99.50th=[ 676], 99.90th=[ 791], 99.95th=[ 930], 00:16:02.870 | 99.99th=[ 938] 00:16:02.870 write: IOPS=952, BW=63.2MiB/s (66.3MB/s)(256MiB/4049msec); 0 zone resets 00:16:02.870 slat (nsec): min=17002, max=80586, avg=22289.19, stdev=5650.47 00:16:02.870 clat (usec): min=392, max=1053, avg=537.94, stdev=65.99 00:16:02.870 lat (usec): min=411, max=1080, avg=560.23, stdev=66.95 00:16:02.870 clat percentiles (usec): 00:16:02.870 | 1.00th=[ 429], 5.00th=[ 453], 10.00th=[ 478], 20.00th=[ 498], 00:16:02.870 | 30.00th=[ 510], 40.00th=[ 515], 50.00th=[ 529], 60.00th=[ 537], 00:16:02.870 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 603], 95.00th=[ 627], 00:16:02.870 | 99.00th=[ 857], 99.50th=[ 898], 99.90th=[ 971], 99.95th=[ 996], 00:16:02.870 | 99.99th=[ 1057] 00:16:02.870 bw ( KiB/s): min=60384, max=66232, per=100.00%, avg=64855.00, stdev=2072.05, samples=8 00:16:02.870 iops : min= 888, max= 974, avg=953.75, stdev=30.47, samples=8 00:16:02.870 lat (usec) : 500=48.24%, 750=50.80%, 1000=0.95% 00:16:02.870 lat 
(msec) : 2=0.01% 00:16:02.870 cpu : usr=99.24%, sys=0.10%, ctx=8, majf=0, minf=1169 00:16:02.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:02.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.870 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:02.870 00:16:02.870 Run status group 0 (all jobs): 00:16:02.870 READ: bw=62.8MiB/s (65.8MB/s), 62.8MiB/s-62.8MiB/s (65.8MB/s-65.8MB/s), io=255MiB (267MB), run=4054-4054msec 00:16:02.870 WRITE: bw=63.2MiB/s (66.3MB/s), 63.2MiB/s-63.2MiB/s (66.3MB/s-66.3MB/s), io=256MiB (269MB), run=4049-4049msec 00:16:04.250 ----------------------------------------------------- 00:16:04.250 Suppressions used: 00:16:04.250 count bytes template 00:16:04.250 1 5 /usr/src/fio/parse.c 00:16:04.250 1 8 libtcmalloc_minimal.so 00:16:04.250 1 904 libcrypto.so 00:16:04.250 ----------------------------------------------------- 00:16:04.250 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:04.250 08:16:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:04.509 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:04.509 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:04.509 fio-3.35 00:16:04.509 Starting 2 threads 00:16:36.591 00:16:36.591 first_half: (groupid=0, jobs=1): err= 0: pid=73713: Sun Nov 17 08:16:39 2024 00:16:36.591 read: IOPS=2235, BW=8943KiB/s (9157kB/s)(256MiB/29287msec) 00:16:36.591 slat (nsec): min=4168, max=56599, avg=7912.42, stdev=3610.52 00:16:36.591 clat (usec): min=943, max=303026, avg=49135.14, stdev=27494.41 00:16:36.591 lat (usec): min=965, max=303034, avg=49143.05, stdev=27494.63 00:16:36.591 clat percentiles (msec): 00:16:36.591 | 1.00th=[ 14], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 42], 00:16:36.591 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 43], 00:16:36.591 | 70.00th=[ 45], 80.00th=[ 50], 90.00th=[ 52], 95.00th=[ 89], 00:16:36.591 | 99.00th=[ 199], 99.50th=[ 213], 99.90th=[ 234], 99.95th=[ 266], 00:16:36.591 | 99.99th=[ 296] 00:16:36.591 write: IOPS=2241, BW=8966KiB/s (9182kB/s)(256MiB/29236msec); 0 zone resets 00:16:36.591 slat (usec): min=4, max=1126, avg= 8.82, stdev= 7.46 00:16:36.591 clat (usec): min=463, max=54031, avg=8073.58, stdev=8287.59 00:16:36.591 lat (usec): min=473, max=54040, avg=8082.40, stdev=8287.83 00:16:36.591 clat percentiles (usec): 00:16:36.591 | 1.00th=[ 1156], 5.00th=[ 1565], 10.00th=[ 1876], 20.00th=[ 3359], 00:16:36.591 | 30.00th=[ 4359], 40.00th=[ 5473], 50.00th=[ 6128], 60.00th=[ 7046], 00:16:36.591 | 70.00th=[ 7570], 80.00th=[ 8979], 90.00th=[14877], 95.00th=[22938], 00:16:36.591 | 99.00th=[46400], 99.50th=[49546], 99.90th=[52167], 99.95th=[52167], 00:16:36.591 | 99.99th=[53216] 00:16:36.591 bw ( KiB/s): min= 56, max=46112, per=100.00%, avg=22635.22, stdev=13976.73, samples=23 00:16:36.591 iops : min= 14, max=11528, avg=5658.78, stdev=3494.17, samples=23 00:16:36.591 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.20% 00:16:36.591 lat (msec) : 2=5.48%, 4=7.24%, 10=28.37%, 20=7.46%, 50=41.47% 00:16:36.591 lat (msec) : 100=7.46%, 250=2.23%, 500=0.03% 00:16:36.591 cpu : usr=98.60%, sys=0.45%, ctx=60, majf=0, minf=5532 00:16:36.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:36.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.591 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.591 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.591 second_half: (groupid=0, jobs=1): err= 0: pid=73714: Sun Nov 17 08:16:39 2024 00:16:36.591 read: IOPS=2256, BW=9026KiB/s (9243kB/s)(256MiB/29021msec) 00:16:36.591 slat (nsec): min=3986, max=53928, avg=7683.57, stdev=3500.93 00:16:36.591 clat (msec): min=11, max=232, avg=49.47, stdev=24.08 00:16:36.591 lat (msec): min=11, max=232, avg=49.48, stdev=24.08 00:16:36.591 clat percentiles (msec): 00:16:36.591 | 1.00th=[ 37], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:16:36.591 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 44], 00:16:36.591 | 70.00th=[ 45], 80.00th=[ 51], 90.00th=[ 53], 95.00th=[ 83], 00:16:36.591 | 99.00th=[ 182], 
99.50th=[ 194], 99.90th=[ 222], 99.95th=[ 228], 00:16:36.591 | 99.99th=[ 232] 00:16:36.591 write: IOPS=2271, BW=9085KiB/s (9303kB/s)(256MiB/28856msec); 0 zone resets 00:16:36.591 slat (usec): min=4, max=500, avg= 8.39, stdev= 6.04 00:16:36.591 clat (usec): min=518, max=45857, avg=7220.90, stdev=4430.73 00:16:36.591 lat (usec): min=531, max=45865, avg=7229.29, stdev=4430.93 00:16:36.591 clat percentiles (usec): 00:16:36.591 | 1.00th=[ 1319], 5.00th=[ 2212], 10.00th=[ 3097], 20.00th=[ 4015], 00:16:36.591 | 30.00th=[ 5080], 40.00th=[ 5669], 50.00th=[ 6521], 60.00th=[ 7046], 00:16:36.591 | 70.00th=[ 7635], 80.00th=[ 8848], 90.00th=[13304], 95.00th=[15270], 00:16:36.591 | 99.00th=[23200], 99.50th=[32900], 99.90th=[40109], 99.95th=[43779], 00:16:36.591 | 99.99th=[44827] 00:16:36.591 bw ( KiB/s): min= 872, max=41864, per=100.00%, avg=22705.04, stdev=14231.69, samples=23 00:16:36.591 iops : min= 218, max=10466, avg=5676.26, stdev=3557.92, samples=23 00:16:36.591 lat (usec) : 750=0.04%, 1000=0.17% 00:16:36.591 lat (msec) : 2=1.71%, 4=7.93%, 10=31.72%, 20=7.90%, 50=40.29% 00:16:36.591 lat (msec) : 100=8.14%, 250=2.10% 00:16:36.591 cpu : usr=98.80%, sys=0.45%, ctx=47, majf=0, minf=5581 00:16:36.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:36.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.591 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.591 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.591 00:16:36.591 Run status group 0 (all jobs): 00:16:36.591 READ: bw=17.5MiB/s (18.3MB/s), 8943KiB/s-9026KiB/s (9157kB/s-9243kB/s), io=512MiB (536MB), run=29021-29287msec 00:16:36.591 WRITE: bw=17.5MiB/s (18.4MB/s), 8966KiB/s-9085KiB/s (9182kB/s-9303kB/s), io=512MiB (537MB), run=28856-29236msec 00:16:36.850 ----------------------------------------------------- 00:16:36.850 Suppressions used: 00:16:36.850 count bytes template 00:16:36.850 2 10 /usr/src/fio/parse.c 00:16:36.850 2 192 /usr/src/fio/iolog.c 00:16:36.850 1 8 libtcmalloc_minimal.so 00:16:36.850 1 904 libcrypto.so 00:16:36.850 ----------------------------------------------------- 00:16:36.850 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:36.850 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:36.851 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:36.851 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:36.851 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:36.851 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:36.851 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:37.110 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:37.110 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:37.110 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:37.110 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:37.110 08:16:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:37.110 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:37.110 fio-3.35 00:16:37.110 Starting 1 thread 00:16:59.044 00:16:59.044 test: (groupid=0, jobs=1): err= 0: pid=74086: Sun Nov 17 08:17:00 2024 00:16:59.044 read: IOPS=5767, BW=22.5MiB/s (23.6MB/s)(255MiB/11305msec) 00:16:59.044 slat (nsec): min=4095, max=57306, avg=6853.03, stdev=3639.11 00:16:59.044 clat (usec): min=792, max=44267, avg=22181.36, stdev=1229.32 00:16:59.044 lat (usec): min=801, max=44277, avg=22188.21, stdev=1229.42 00:16:59.044 clat percentiles (usec): 00:16:59.044 | 1.00th=[20841], 5.00th=[21103], 10.00th=[21365], 20.00th=[21627], 00:16:59.044 | 30.00th=[21890], 40.00th=[21890], 50.00th=[22152], 60.00th=[22152], 00:16:59.044 | 70.00th=[22414], 80.00th=[22414], 90.00th=[22938], 95.00th=[23725], 00:16:59.044 | 99.00th=[26870], 99.50th=[27395], 99.90th=[33162], 99.95th=[39060], 00:16:59.044 | 99.99th=[43779] 00:16:59.044 write: IOPS=11.3k, BW=44.3MiB/s (46.4MB/s)(256MiB/5783msec); 0 zone resets 00:16:59.044 slat (usec): min=4, max=441, avg= 9.65, stdev= 6.80 00:16:59.044 clat (usec): min=729, max=69357, avg=11231.43, stdev=14031.51 00:16:59.044 lat (usec): min=736, max=69369, avg=11241.08, stdev=14031.61 00:16:59.044 clat percentiles (usec): 00:16:59.044 | 1.00th=[ 979], 5.00th=[ 1205], 10.00th=[ 1319], 20.00th=[ 1483], 00:16:59.044 | 30.00th=[ 1680], 40.00th=[ 2114], 50.00th=[ 7177], 60.00th=[ 8455], 00:16:59.044 | 70.00th=[ 9896], 80.00th=[12387], 90.00th=[41681], 95.00th=[43254], 00:16:59.044 | 99.00th=[45876], 99.50th=[47449], 99.90th=[53216], 99.95th=[58459], 00:16:59.044 | 99.99th=[67634] 00:16:59.044 bw ( KiB/s): min=21024, max=69736, per=96.37%, avg=43683.17, stdev=11747.19, samples=12 00:16:59.044 iops : min= 5256, max=17434, avg=10920.75, stdev=2936.79, samples=12 00:16:59.044 lat (usec) : 750=0.01%, 1000=0.62% 00:16:59.044 lat (msec) : 2=18.91%, 4=1.37%, 10=14.56%, 20=6.70%, 50=57.74% 00:16:59.044 lat (msec) : 100=0.10% 00:16:59.044 cpu : usr=97.97%, sys=1.10%, ctx=28, majf=0, minf=5565 00:16:59.044 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:59.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.044 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.044 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.045 00:16:59.045 Run status group 0 (all jobs): 00:16:59.045 READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=255MiB (267MB), run=11305-11305msec 00:16:59.045 WRITE: bw=44.3MiB/s (46.4MB/s), 44.3MiB/s-44.3MiB/s (46.4MB/s-46.4MB/s), io=256MiB (268MB), run=5783-5783msec 00:16:59.045 ----------------------------------------------------- 00:16:59.045 Suppressions used: 00:16:59.045 count bytes template 00:16:59.045 1 5 /usr/src/fio/parse.c 00:16:59.045 2 192 /usr/src/fio/iolog.c 00:16:59.045 1 8 libtcmalloc_minimal.so 00:16:59.045 1 904 libcrypto.so 00:16:59.045 ----------------------------------------------------- 00:16:59.045 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:59.045 Remove shared memory files 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57745 /dev/shm/spdk_tgt_trace.pid72322 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:59.045 ************************************ 00:16:59.045 END TEST ftl_fio_basic 00:16:59.045 ************************************ 00:16:59.045 00:16:59.045 real 1m13.492s 00:16:59.045 user 2m42.531s 00:16:59.045 sys 0m3.834s 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.045 08:17:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:59.045 08:17:02 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:59.045 08:17:02 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:59.045 08:17:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.045 08:17:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:59.045 ************************************ 00:16:59.045 START TEST ftl_bdevperf 00:16:59.045 ************************************ 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:59.045 * Looking for test storage... 
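For reference, the fio_bdev invocation traced above reduces to the following shell sequence. This is a minimal sketch using the paths from this run; the variable names are illustrative, and it assumes an ASan build (SPDK_RUN_ASAN=1), which is why the sanitizer runtime must be preloaded ahead of the spdk_bdev ioengine plugin:

    # locate the ASan runtime the fio plugin links against
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the sanitizer first, then the ioengine, and hand fio the job file
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio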
00:16:59.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:59.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.045 --rc genhtml_branch_coverage=1 00:16:59.045 --rc genhtml_function_coverage=1 00:16:59.045 --rc genhtml_legend=1 00:16:59.045 --rc geninfo_all_blocks=1 00:16:59.045 --rc geninfo_unexecuted_blocks=1 00:16:59.045 00:16:59.045 ' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:59.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.045 --rc genhtml_branch_coverage=1 00:16:59.045 
--rc genhtml_function_coverage=1 00:16:59.045 --rc genhtml_legend=1 00:16:59.045 --rc geninfo_all_blocks=1 00:16:59.045 --rc geninfo_unexecuted_blocks=1 00:16:59.045 00:16:59.045 ' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:59.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.045 --rc genhtml_branch_coverage=1 00:16:59.045 --rc genhtml_function_coverage=1 00:16:59.045 --rc genhtml_legend=1 00:16:59.045 --rc geninfo_all_blocks=1 00:16:59.045 --rc geninfo_unexecuted_blocks=1 00:16:59.045 00:16:59.045 ' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:59.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.045 --rc genhtml_branch_coverage=1 00:16:59.045 --rc genhtml_function_coverage=1 00:16:59.045 --rc genhtml_legend=1 00:16:59.045 --rc geninfo_all_blocks=1 00:16:59.045 --rc geninfo_unexecuted_blocks=1 00:16:59.045 00:16:59.045 ' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:59.045 08:17:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=74360 00:16:59.046 08:17:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:59.046 08:17:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:59.046 08:17:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 74360 00:16:59.046 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 74360 ']' 00:16:59.046 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.046 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.046 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.046 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.046 08:17:02 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:59.046 [2024-11-17 08:17:02.307984] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
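In outline, the harness started here runs bdevperf idle (-z) with ftl0 as the bdev under test, then drives workloads over the default RPC socket with bdevperf.py. A minimal sketch of that pattern, using only commands that appear in this run (the pid variable is illustrative):

    # start bdevperf in wait mode, targeting the ftl0 bdev
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # ... once ftl0 has been constructed over /var/tmp/spdk.sock:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        perform_tests -q 1 -w randwrite -t 4 -o 69632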
00:16:59.046 [2024-11-17 08:17:02.308576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74360 ] 00:16:59.046 [2024-11-17 08:17:02.473075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.046 [2024-11-17 08:17:02.554685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:59.046 { 00:16:59.046 "name": "nvme0n1", 00:16:59.046 "aliases": [ 00:16:59.046 "b693d526-ef1d-4c50-bf47-08bfe0b9b569" 00:16:59.046 ], 00:16:59.046 "product_name": "NVMe disk", 00:16:59.046 "block_size": 4096, 00:16:59.046 "num_blocks": 1310720, 00:16:59.046 "uuid": "b693d526-ef1d-4c50-bf47-08bfe0b9b569", 00:16:59.046 "numa_id": -1, 00:16:59.046 "assigned_rate_limits": { 00:16:59.046 "rw_ios_per_sec": 0, 00:16:59.046 "rw_mbytes_per_sec": 0, 00:16:59.046 "r_mbytes_per_sec": 0, 00:16:59.046 "w_mbytes_per_sec": 0 00:16:59.046 }, 00:16:59.046 "claimed": true, 00:16:59.046 "claim_type": "read_many_write_one", 00:16:59.046 "zoned": false, 00:16:59.046 "supported_io_types": { 00:16:59.046 "read": true, 00:16:59.046 "write": true, 00:16:59.046 "unmap": true, 00:16:59.046 "flush": true, 00:16:59.046 "reset": true, 00:16:59.046 "nvme_admin": true, 00:16:59.046 "nvme_io": true, 00:16:59.046 "nvme_io_md": false, 00:16:59.046 "write_zeroes": true, 00:16:59.046 "zcopy": false, 00:16:59.046 "get_zone_info": false, 00:16:59.046 "zone_management": false, 00:16:59.046 "zone_append": false, 00:16:59.046 "compare": true, 00:16:59.046 "compare_and_write": false, 00:16:59.046 "abort": true, 00:16:59.046 "seek_hole": false, 00:16:59.046 "seek_data": false, 00:16:59.046 "copy": true, 00:16:59.046 "nvme_iov_md": false 00:16:59.046 }, 00:16:59.046 "driver_specific": { 00:16:59.046 
"nvme": [ 00:16:59.046 { 00:16:59.046 "pci_address": "0000:00:11.0", 00:16:59.046 "trid": { 00:16:59.046 "trtype": "PCIe", 00:16:59.046 "traddr": "0000:00:11.0" 00:16:59.046 }, 00:16:59.046 "ctrlr_data": { 00:16:59.046 "cntlid": 0, 00:16:59.046 "vendor_id": "0x1b36", 00:16:59.046 "model_number": "QEMU NVMe Ctrl", 00:16:59.046 "serial_number": "12341", 00:16:59.046 "firmware_revision": "8.0.0", 00:16:59.046 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:59.046 "oacs": { 00:16:59.046 "security": 0, 00:16:59.046 "format": 1, 00:16:59.046 "firmware": 0, 00:16:59.046 "ns_manage": 1 00:16:59.046 }, 00:16:59.046 "multi_ctrlr": false, 00:16:59.046 "ana_reporting": false 00:16:59.046 }, 00:16:59.046 "vs": { 00:16:59.046 "nvme_version": "1.4" 00:16:59.046 }, 00:16:59.046 "ns_data": { 00:16:59.046 "id": 1, 00:16:59.046 "can_share": false 00:16:59.046 } 00:16:59.046 } 00:16:59.046 ], 00:16:59.046 "mp_policy": "active_passive" 00:16:59.046 } 00:16:59.046 } 00:16:59.046 ]' 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:59.046 08:17:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:59.305 08:17:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=004ab635-3d9b-4d76-9d3f-b3ea38eaa024 00:16:59.305 08:17:04 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:59.305 08:17:04 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 004ab635-3d9b-4d76-9d3f-b3ea38eaa024 00:16:59.563 08:17:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:59.822 08:17:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=90b1f2f8-0520-4ea5-b8bb-ab9849a0a581 00:16:59.822 08:17:04 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 90b1f2f8-0520-4ea5-b8bb-ab9849a0a581 00:17:00.091 08:17:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=51a2b97b-0cfd-4e7e-8b37-16216819a86d 00:17:00.091 08:17:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 51a2b97b-0cfd-4e7e-8b37-16216819a86d 00:17:00.091 08:17:05 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:00.091 08:17:05 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:00.091 08:17:05 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=51a2b97b-0cfd-4e7e-8b37-16216819a86d 00:17:00.091 08:17:05 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:00.091 08:17:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 51a2b97b-0cfd-4e7e-8b37-16216819a86d 00:17:00.091 08:17:05 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=51a2b97b-0cfd-4e7e-8b37-16216819a86d 00:17:00.091 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:00.091 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:17:00.091 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:00.091 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51a2b97b-0cfd-4e7e-8b37-16216819a86d 00:17:00.364 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:00.364 { 00:17:00.364 "name": "51a2b97b-0cfd-4e7e-8b37-16216819a86d", 00:17:00.364 "aliases": [ 00:17:00.364 "lvs/nvme0n1p0" 00:17:00.364 ], 00:17:00.364 "product_name": "Logical Volume", 00:17:00.364 "block_size": 4096, 00:17:00.364 "num_blocks": 26476544, 00:17:00.364 "uuid": "51a2b97b-0cfd-4e7e-8b37-16216819a86d", 00:17:00.364 "assigned_rate_limits": { 00:17:00.364 "rw_ios_per_sec": 0, 00:17:00.364 "rw_mbytes_per_sec": 0, 00:17:00.364 "r_mbytes_per_sec": 0, 00:17:00.364 "w_mbytes_per_sec": 0 00:17:00.364 }, 00:17:00.364 "claimed": false, 00:17:00.364 "zoned": false, 00:17:00.364 "supported_io_types": { 00:17:00.364 "read": true, 00:17:00.364 "write": true, 00:17:00.364 "unmap": true, 00:17:00.364 "flush": false, 00:17:00.364 "reset": true, 00:17:00.364 "nvme_admin": false, 00:17:00.364 "nvme_io": false, 00:17:00.364 "nvme_io_md": false, 00:17:00.364 "write_zeroes": true, 00:17:00.364 "zcopy": false, 00:17:00.364 "get_zone_info": false, 00:17:00.364 "zone_management": false, 00:17:00.364 "zone_append": false, 00:17:00.364 "compare": false, 00:17:00.364 "compare_and_write": false, 00:17:00.364 "abort": false, 00:17:00.364 "seek_hole": true, 00:17:00.364 "seek_data": true, 00:17:00.364 "copy": false, 00:17:00.364 "nvme_iov_md": false 00:17:00.364 }, 00:17:00.364 "driver_specific": { 00:17:00.364 "lvol": { 00:17:00.364 "lvol_store_uuid": "90b1f2f8-0520-4ea5-b8bb-ab9849a0a581", 00:17:00.364 "base_bdev": "nvme0n1", 00:17:00.364 "thin_provision": true, 00:17:00.364 "num_allocated_clusters": 0, 00:17:00.364 "snapshot": false, 00:17:00.364 "clone": false, 00:17:00.364 "esnap_clone": false 00:17:00.364 } 00:17:00.364 } 00:17:00.364 } 00:17:00.364 ]' 00:17:00.364 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:00.364 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:00.364 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:00.364 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:00.364 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:00.364 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:17:00.364 08:17:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:00.364 08:17:05 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:00.364 08:17:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:00.933 08:17:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:00.933 08:17:05 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:00.933 08:17:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 51a2b97b-0cfd-4e7e-8b37-16216819a86d 00:17:00.933 08:17:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=51a2b97b-0cfd-4e7e-8b37-16216819a86d 00:17:00.933 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:00.933 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:17:00.933 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:00.933 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51a2b97b-0cfd-4e7e-8b37-16216819a86d 00:17:00.933 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:00.933 { 00:17:00.933 "name": "51a2b97b-0cfd-4e7e-8b37-16216819a86d", 00:17:00.933 "aliases": [ 00:17:00.933 "lvs/nvme0n1p0" 00:17:00.933 ], 00:17:00.933 "product_name": "Logical Volume", 00:17:00.933 "block_size": 4096, 00:17:00.933 "num_blocks": 26476544, 00:17:00.933 "uuid": "51a2b97b-0cfd-4e7e-8b37-16216819a86d", 00:17:00.933 "assigned_rate_limits": { 00:17:00.933 "rw_ios_per_sec": 0, 00:17:00.933 "rw_mbytes_per_sec": 0, 00:17:00.933 "r_mbytes_per_sec": 0, 00:17:00.933 "w_mbytes_per_sec": 0 00:17:00.933 }, 00:17:00.933 "claimed": false, 00:17:00.933 "zoned": false, 00:17:00.933 "supported_io_types": { 00:17:00.933 "read": true, 00:17:00.933 "write": true, 00:17:00.933 "unmap": true, 00:17:00.933 "flush": false, 00:17:00.933 "reset": true, 00:17:00.933 "nvme_admin": false, 00:17:00.933 "nvme_io": false, 00:17:00.933 "nvme_io_md": false, 00:17:00.933 "write_zeroes": true, 00:17:00.933 "zcopy": false, 00:17:00.933 "get_zone_info": false, 00:17:00.933 "zone_management": false, 00:17:00.933 "zone_append": false, 00:17:00.933 "compare": false, 00:17:00.933 "compare_and_write": false, 00:17:00.933 "abort": false, 00:17:00.933 "seek_hole": true, 00:17:00.933 "seek_data": true, 00:17:00.933 "copy": false, 00:17:00.933 "nvme_iov_md": false 00:17:00.933 }, 00:17:00.933 "driver_specific": { 00:17:00.933 "lvol": { 00:17:00.933 "lvol_store_uuid": "90b1f2f8-0520-4ea5-b8bb-ab9849a0a581", 00:17:00.933 "base_bdev": "nvme0n1", 00:17:00.933 "thin_provision": true, 00:17:00.933 "num_allocated_clusters": 0, 00:17:00.933 "snapshot": false, 00:17:00.933 "clone": false, 00:17:00.933 "esnap_clone": false 00:17:00.933 } 00:17:00.933 } 00:17:00.933 } 00:17:00.933 ]' 00:17:00.933 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:01.192 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:01.192 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:01.192 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:01.192 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:01.192 08:17:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:17:01.192 08:17:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:01.192 08:17:06 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:01.451 08:17:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:17:01.451 08:17:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 51a2b97b-0cfd-4e7e-8b37-16216819a86d 00:17:01.451 08:17:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=51a2b97b-0cfd-4e7e-8b37-16216819a86d 00:17:01.451 08:17:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:01.451 08:17:06 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:17:01.451 08:17:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:01.451 08:17:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51a2b97b-0cfd-4e7e-8b37-16216819a86d 00:17:01.710 08:17:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:01.710 { 00:17:01.710 "name": "51a2b97b-0cfd-4e7e-8b37-16216819a86d", 00:17:01.710 "aliases": [ 00:17:01.710 "lvs/nvme0n1p0" 00:17:01.710 ], 00:17:01.710 "product_name": "Logical Volume", 00:17:01.710 "block_size": 4096, 00:17:01.710 "num_blocks": 26476544, 00:17:01.710 "uuid": "51a2b97b-0cfd-4e7e-8b37-16216819a86d", 00:17:01.710 "assigned_rate_limits": { 00:17:01.710 "rw_ios_per_sec": 0, 00:17:01.710 "rw_mbytes_per_sec": 0, 00:17:01.710 "r_mbytes_per_sec": 0, 00:17:01.710 "w_mbytes_per_sec": 0 00:17:01.710 }, 00:17:01.710 "claimed": false, 00:17:01.710 "zoned": false, 00:17:01.710 "supported_io_types": { 00:17:01.710 "read": true, 00:17:01.710 "write": true, 00:17:01.710 "unmap": true, 00:17:01.710 "flush": false, 00:17:01.710 "reset": true, 00:17:01.710 "nvme_admin": false, 00:17:01.710 "nvme_io": false, 00:17:01.710 "nvme_io_md": false, 00:17:01.710 "write_zeroes": true, 00:17:01.710 "zcopy": false, 00:17:01.710 "get_zone_info": false, 00:17:01.710 "zone_management": false, 00:17:01.710 "zone_append": false, 00:17:01.710 "compare": false, 00:17:01.710 "compare_and_write": false, 00:17:01.710 "abort": false, 00:17:01.710 "seek_hole": true, 00:17:01.710 "seek_data": true, 00:17:01.710 "copy": false, 00:17:01.710 "nvme_iov_md": false 00:17:01.710 }, 00:17:01.710 "driver_specific": { 00:17:01.710 "lvol": { 00:17:01.710 "lvol_store_uuid": "90b1f2f8-0520-4ea5-b8bb-ab9849a0a581", 00:17:01.710 "base_bdev": "nvme0n1", 00:17:01.710 "thin_provision": true, 00:17:01.710 "num_allocated_clusters": 0, 00:17:01.710 "snapshot": false, 00:17:01.710 "clone": false, 00:17:01.710 "esnap_clone": false 00:17:01.710 } 00:17:01.710 } 00:17:01.710 } 00:17:01.710 ]' 00:17:01.710 08:17:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:01.710 08:17:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:01.710 08:17:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:01.710 08:17:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:01.710 08:17:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:01.710 08:17:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:17:01.710 08:17:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:17:01.710 08:17:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 51a2b97b-0cfd-4e7e-8b37-16216819a86d -c nvc0n1p0 --l2p_dram_limit 20 00:17:01.970 [2024-11-17 08:17:06.802236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.970 [2024-11-17 08:17:06.802311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:01.970 [2024-11-17 08:17:06.802331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:01.970 [2024-11-17 08:17:06.802345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.970 [2024-11-17 08:17:06.802430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.970 [2024-11-17 08:17:06.802451] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.970 [2024-11-17 08:17:06.802478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:01.970 [2024-11-17 08:17:06.802489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.970 [2024-11-17 08:17:06.802514] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:01.970 [2024-11-17 08:17:06.803493] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:01.970 [2024-11-17 08:17:06.803677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.970 [2024-11-17 08:17:06.803719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.970 [2024-11-17 08:17:06.803748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:17:01.970 [2024-11-17 08:17:06.803761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.970 [2024-11-17 08:17:06.803881] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c99d9177-a6d2-4fa2-9f79-68de91ad7e5c 00:17:01.970 [2024-11-17 08:17:06.804983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.970 [2024-11-17 08:17:06.805051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:01.970 [2024-11-17 08:17:06.805079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:01.970 [2024-11-17 08:17:06.805137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.970 [2024-11-17 08:17:06.809649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.970 [2024-11-17 08:17:06.809684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.970 [2024-11-17 08:17:06.809715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.459 ms 00:17:01.970 [2024-11-17 08:17:06.809725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.970 [2024-11-17 08:17:06.809824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.970 [2024-11-17 08:17:06.809841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.970 [2024-11-17 08:17:06.809857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:01.970 [2024-11-17 08:17:06.809867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.970 [2024-11-17 08:17:06.809933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.970 [2024-11-17 08:17:06.809948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:01.970 [2024-11-17 08:17:06.809960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:01.970 [2024-11-17 08:17:06.809970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.970 [2024-11-17 08:17:06.809997] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:01.970 [2024-11-17 08:17:06.814238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.970 [2024-11-17 08:17:06.814293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.970 [2024-11-17 08:17:06.814308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.251 ms 00:17:01.970 [2024-11-17 08:17:06.814322] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.970 [2024-11-17 08:17:06.814361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.970 [2024-11-17 08:17:06.814377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:01.970 [2024-11-17 08:17:06.814389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:01.970 [2024-11-17 08:17:06.814401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.970 [2024-11-17 08:17:06.814478] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:01.970 [2024-11-17 08:17:06.814634] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:01.970 [2024-11-17 08:17:06.814649] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:01.970 [2024-11-17 08:17:06.814665] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:01.970 [2024-11-17 08:17:06.814677] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:01.970 [2024-11-17 08:17:06.814690] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:01.970 [2024-11-17 08:17:06.814700] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:01.970 [2024-11-17 08:17:06.814711] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:01.970 [2024-11-17 08:17:06.814720] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:01.970 [2024-11-17 08:17:06.814730] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:01.970 [2024-11-17 08:17:06.814740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.970 [2024-11-17 08:17:06.814754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:01.970 [2024-11-17 08:17:06.814764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:17:01.970 [2024-11-17 08:17:06.814775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.970 [2024-11-17 08:17:06.814849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.970 [2024-11-17 08:17:06.814867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:01.970 [2024-11-17 08:17:06.814877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:01.970 [2024-11-17 08:17:06.814890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.970 [2024-11-17 08:17:06.814972] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:01.970 [2024-11-17 08:17:06.814988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:01.970 [2024-11-17 08:17:06.815001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.970 [2024-11-17 08:17:06.815012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:01.970 [2024-11-17 08:17:06.815032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:01.970 
[2024-11-17 08:17:06.815052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:01.970 [2024-11-17 08:17:06.815061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.970 [2024-11-17 08:17:06.815080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:01.970 [2024-11-17 08:17:06.815123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:01.970 [2024-11-17 08:17:06.815133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.970 [2024-11-17 08:17:06.815156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:01.970 [2024-11-17 08:17:06.815165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:01.970 [2024-11-17 08:17:06.815225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:01.970 [2024-11-17 08:17:06.815248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:01.970 [2024-11-17 08:17:06.815257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:01.970 [2024-11-17 08:17:06.815280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.970 [2024-11-17 08:17:06.815300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:01.970 [2024-11-17 08:17:06.815312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.970 [2024-11-17 08:17:06.815332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:01.970 [2024-11-17 08:17:06.815340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.970 [2024-11-17 08:17:06.815404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:01.970 [2024-11-17 08:17:06.815417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.970 [2024-11-17 08:17:06.815441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:01.970 [2024-11-17 08:17:06.815468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.970 [2024-11-17 08:17:06.815491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:01.970 [2024-11-17 08:17:06.815504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:01.970 [2024-11-17 08:17:06.815514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.970 [2024-11-17 08:17:06.815527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:01.970 [2024-11-17 08:17:06.815538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:17:01.970 [2024-11-17 08:17:06.815550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:01.970 [2024-11-17 08:17:06.815574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:01.970 [2024-11-17 08:17:06.815587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815600] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:01.970 [2024-11-17 08:17:06.815612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:01.970 [2024-11-17 08:17:06.815626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.970 [2024-11-17 08:17:06.815652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.970 [2024-11-17 08:17:06.815684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:01.970 [2024-11-17 08:17:06.815695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:01.970 [2024-11-17 08:17:06.815708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:01.970 [2024-11-17 08:17:06.815719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:01.970 [2024-11-17 08:17:06.815730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:01.970 [2024-11-17 08:17:06.815741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:01.970 [2024-11-17 08:17:06.815770] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:01.970 [2024-11-17 08:17:06.815783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.970 [2024-11-17 08:17:06.815797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:01.971 [2024-11-17 08:17:06.815808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:01.971 [2024-11-17 08:17:06.815820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:01.971 [2024-11-17 08:17:06.815830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:01.971 [2024-11-17 08:17:06.815843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:01.971 [2024-11-17 08:17:06.815853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:01.971 [2024-11-17 08:17:06.815866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:01.971 [2024-11-17 08:17:06.815876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:01.971 [2024-11-17 08:17:06.815890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:01.971 [2024-11-17 08:17:06.815901] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:01.971 [2024-11-17 08:17:06.815913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:01.971 [2024-11-17 08:17:06.815924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:01.971 [2024-11-17 08:17:06.815936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:01.971 [2024-11-17 08:17:06.815946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:01.971 [2024-11-17 08:17:06.815958] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:01.971 [2024-11-17 08:17:06.815970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.971 [2024-11-17 08:17:06.815985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:01.971 [2024-11-17 08:17:06.815996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:01.971 [2024-11-17 08:17:06.816008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:01.971 [2024-11-17 08:17:06.816020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:01.971 [2024-11-17 08:17:06.816034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.971 [2024-11-17 08:17:06.816048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:01.971 [2024-11-17 08:17:06.816061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:17:01.971 [2024-11-17 08:17:06.816072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.971 [2024-11-17 08:17:06.816133] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
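To recap the construction that produced the layout dump above, the traced RPCs amount to the sequence below (a sketch assembled from this run's commands; the UUIDs are run-specific and $rpc is just shorthand). Note the arithmetic behind the dump: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region, and the --l2p_dram_limit of 20 is why the cache reports a 19 (of 20) MiB resident limit further down:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base device: a 103424 MiB thin-provisioned lvol on the namespace at 0000:00:11.0
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 90b1f2f8-0520-4ea5-b8bb-ab9849a0a581
    # NV cache: a 5171 MiB split of the namespace at 0000:00:10.0
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # tie both into ftl0, capping the resident L2P at 20 MiB of DRAM
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 51a2b97b-0cfd-4e7e-8b37-16216819a86d \
        -c nvc0n1p0 --l2p_dram_limit 20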
00:17:01.971 [2024-11-17 08:17:06.816149] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:04.500 [2024-11-17 08:17:09.277271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.500 [2024-11-17 08:17:09.277334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:04.500 [2024-11-17 08:17:09.277377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2461.149 ms 00:17:04.500 [2024-11-17 08:17:09.277388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.500 [2024-11-17 08:17:09.304238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.500 [2024-11-17 08:17:09.304290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.500 [2024-11-17 08:17:09.304327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.616 ms 00:17:04.500 [2024-11-17 08:17:09.304338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.500 [2024-11-17 08:17:09.304491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.500 [2024-11-17 08:17:09.304509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:04.500 [2024-11-17 08:17:09.304525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:04.500 [2024-11-17 08:17:09.304535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.500 [2024-11-17 08:17:09.352791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.500 [2024-11-17 08:17:09.352838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.500 [2024-11-17 08:17:09.352873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.207 ms 00:17:04.500 [2024-11-17 08:17:09.352900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.500 [2024-11-17 08:17:09.352949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.500 [2024-11-17 08:17:09.352982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.501 [2024-11-17 08:17:09.352996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:04.501 [2024-11-17 08:17:09.353006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.501 [2024-11-17 08:17:09.353482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.501 [2024-11-17 08:17:09.353524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.501 [2024-11-17 08:17:09.353570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:17:04.501 [2024-11-17 08:17:09.353582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.501 [2024-11-17 08:17:09.353765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.501 [2024-11-17 08:17:09.353790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.501 [2024-11-17 08:17:09.353807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:17:04.501 [2024-11-17 08:17:09.353818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.501 [2024-11-17 08:17:09.368473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.501 [2024-11-17 08:17:09.368685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.501 [2024-11-17 
08:17:09.368815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.631 ms 00:17:04.501 [2024-11-17 08:17:09.368866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.501 [2024-11-17 08:17:09.380837] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:04.501 [2024-11-17 08:17:09.385590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.501 [2024-11-17 08:17:09.385641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:04.501 [2024-11-17 08:17:09.385657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.467 ms 00:17:04.501 [2024-11-17 08:17:09.385668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.501 [2024-11-17 08:17:09.447238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.501 [2024-11-17 08:17:09.447566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:04.501 [2024-11-17 08:17:09.447596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.537 ms 00:17:04.501 [2024-11-17 08:17:09.447612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.501 [2024-11-17 08:17:09.447828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.501 [2024-11-17 08:17:09.447853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:04.501 [2024-11-17 08:17:09.447881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:17:04.501 [2024-11-17 08:17:09.447905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.501 [2024-11-17 08:17:09.472934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.501 [2024-11-17 08:17:09.472976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:04.501 [2024-11-17 08:17:09.472993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.968 ms 00:17:04.501 [2024-11-17 08:17:09.473004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.501 [2024-11-17 08:17:09.497104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.501 [2024-11-17 08:17:09.497277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:04.501 [2024-11-17 08:17:09.497304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.060 ms 00:17:04.501 [2024-11-17 08:17:09.497317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.501 [2024-11-17 08:17:09.497980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.501 [2024-11-17 08:17:09.498009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:04.501 [2024-11-17 08:17:09.498022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:17:04.501 [2024-11-17 08:17:09.498033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.759 [2024-11-17 08:17:09.570593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.759 [2024-11-17 08:17:09.570648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:04.759 [2024-11-17 08:17:09.570664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.519 ms 00:17:04.759 [2024-11-17 08:17:09.570676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.759 [2024-11-17 
08:17:09.596795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.759 [2024-11-17 08:17:09.596840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:04.759 [2024-11-17 08:17:09.596856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.022 ms 00:17:04.759 [2024-11-17 08:17:09.596871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.759 [2024-11-17 08:17:09.621395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.759 [2024-11-17 08:17:09.621437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:04.759 [2024-11-17 08:17:09.621452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.484 ms 00:17:04.759 [2024-11-17 08:17:09.621463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.759 [2024-11-17 08:17:09.646323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.759 [2024-11-17 08:17:09.646381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:04.759 [2024-11-17 08:17:09.646398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.822 ms 00:17:04.759 [2024-11-17 08:17:09.646410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.759 [2024-11-17 08:17:09.646455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.759 [2024-11-17 08:17:09.646491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:04.759 [2024-11-17 08:17:09.646502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:04.759 [2024-11-17 08:17:09.646513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.759 [2024-11-17 08:17:09.646600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.759 [2024-11-17 08:17:09.646618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:04.759 [2024-11-17 08:17:09.646629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:04.759 [2024-11-17 08:17:09.646639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.759 [2024-11-17 08:17:09.647885] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2845.012 ms, result 0 00:17:04.759 { 00:17:04.759 "name": "ftl0", 00:17:04.759 "uuid": "c99d9177-a6d2-4fa2-9f79-68de91ad7e5c" 00:17:04.759 } 00:17:04.759 08:17:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:04.759 08:17:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:17:04.759 08:17:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:17:05.017 08:17:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:05.276 [2024-11-17 08:17:10.132047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:05.276 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:05.276 Zero copy mechanism will not be used. 00:17:05.276 Running I/O for 4 seconds... 
00:17:07.148 1658.00 IOPS, 110.10 MiB/s [2024-11-17T08:17:13.538Z] 1664.00 IOPS, 110.50 MiB/s [2024-11-17T08:17:14.475Z] 1671.33 IOPS, 110.99 MiB/s [2024-11-17T08:17:14.475Z] 1665.25 IOPS, 110.58 MiB/s
00:17:09.463 Latency(us)
00:17:09.463 [2024-11-17T08:17:14.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:09.463 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:17:09.463 ftl0 : 4.00 1664.76 110.55 0.00 0.00 632.30 256.93 2144.81
00:17:09.463 [2024-11-17T08:17:14.475Z] ===================================================================================================================
00:17:09.463 [2024-11-17T08:17:14.475Z] Total : 1664.76 110.55 0.00 0.00 632.30 256.93 2144.81
00:17:09.463 [2024-11-17 08:17:14.142421] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:17:09.463 {
00:17:09.463   "results": [
00:17:09.463     {
00:17:09.463       "job": "ftl0",
00:17:09.463       "core_mask": "0x1",
00:17:09.463       "workload": "randwrite",
00:17:09.463       "status": "finished",
00:17:09.463       "queue_depth": 1,
00:17:09.463       "io_size": 69632,
00:17:09.463       "runtime": 4.001771,
00:17:09.463       "iops": 1664.7629262144185,
00:17:09.463       "mibps": 110.55066306892623,
00:17:09.463       "io_failed": 0,
00:17:09.463       "io_timeout": 0,
00:17:09.463       "avg_latency_us": 632.3045593733796,
00:17:09.463       "min_latency_us": 256.9309090909091,
00:17:09.463       "max_latency_us": 2144.8145454545456
00:17:09.463     }
00:17:09.463   ],
00:17:09.463   "core_count": 1
00:17:09.463 }
00:17:09.463 08:17:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-17 08:17:14.283955] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:17:11.335 7254.00 IOPS, 28.34 MiB/s [2024-11-17T08:17:17.725Z] 6932.00 IOPS, 27.08 MiB/s [2024-11-17T08:17:18.298Z] 6747.00 IOPS, 26.36 MiB/s [2024-11-17T08:17:18.557Z] 6666.25 IOPS, 26.04 MiB/s
00:17:13.545 Latency(us)
00:17:13.545 [2024-11-17T08:17:18.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:13.545 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:17:13.545 ftl0 : 4.02 6660.87 26.02 0.00 0.00 19159.99 327.68 34317.03
00:17:13.545 [2024-11-17T08:17:18.557Z] ===================================================================================================================
00:17:13.545 [2024-11-17T08:17:18.557Z] Total : 6660.87 26.02 0.00 0.00 19159.99 0.00 34317.03
00:17:13.545 {
00:17:13.545   "results": [
00:17:13.545     {
00:17:13.545       "job": "ftl0",
00:17:13.545       "core_mask": "0x1",
00:17:13.545       "workload": "randwrite",
00:17:13.545       "status": "finished",
00:17:13.545       "queue_depth": 128,
00:17:13.545       "io_size": 4096,
00:17:13.545       "runtime": 4.021698,
00:17:13.545       "iops": 6660.8681208782955,
00:17:13.545       "mibps": 26.01901609718084,
00:17:13.545       "io_failed": 0,
00:17:13.545       "io_timeout": 0,
00:17:13.545       "avg_latency_us": 19159.992663200617,
00:17:13.545       "min_latency_us": 327.68,
00:17:13.545       "max_latency_us": 34317.03272727273
00:17:13.545     }
00:17:13.545   ],
00:17:13.545   "core_count": 1
00:17:13.545 }
00:17:13.545 [2024-11-17 08:17:18.315167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:17:13.545 08:17:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-11-17 08:17:18.470742] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:17:15.860 4549.00 IOPS, 17.77 MiB/s [2024-11-17T08:17:21.810Z] 4572.50 IOPS, 17.86 MiB/s [2024-11-17T08:17:22.749Z] 4568.33 IOPS, 17.85 MiB/s [2024-11-17T08:17:22.749Z] 4565.50 IOPS, 17.83 MiB/s
00:17:17.737 Latency(us)
00:17:17.737 [2024-11-17T08:17:22.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:17.737 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:17.737 Verification LBA range: start 0x0 length 0x1400000
00:17:17.737 ftl0 : 4.02 4577.86 17.88 0.00 0.00 27854.27 405.88 30027.40
00:17:17.737 [2024-11-17T08:17:22.749Z] ===================================================================================================================
00:17:17.737 [2024-11-17T08:17:22.749Z] Total : 4577.86 17.88 0.00 0.00 27854.27 0.00 30027.40
00:17:17.737 [2024-11-17 08:17:22.503004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:17:17.737 {
00:17:17.737   "results": [
00:17:17.737     {
00:17:17.737       "job": "ftl0",
00:17:17.737       "core_mask": "0x1",
00:17:17.737       "workload": "verify",
00:17:17.737       "status": "finished",
00:17:17.737       "verify_range": {
00:17:17.737         "start": 0,
00:17:17.737         "length": 20971520
00:17:17.737       },
00:17:17.737       "queue_depth": 128,
00:17:17.737       "io_size": 4096,
00:17:17.737       "runtime": 4.016071,
00:17:17.737       "iops": 4577.85731377757,
00:17:17.737       "mibps": 17.882255131943634,
00:17:17.737       "io_failed": 0,
00:17:17.737       "io_timeout": 0,
00:17:17.737       "avg_latency_us": 27854.2725933691,
00:17:17.737       "min_latency_us": 405.87636363636364,
00:17:17.737       "max_latency_us": 30027.403636363637
00:17:17.737     }
00:17:17.737   ],
00:17:17.737   "core_count": 1
00:17:17.737 }
00:17:17.737 08:17:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:17:17.997 [2024-11-17 08:17:22.807839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:17.997 [2024-11-17 08:17:22.808039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:17.997 [2024-11-17 08:17:22.808071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:17.997 [2024-11-17 08:17:22.808087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:17.997 [2024-11-17 08:17:22.808159] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:17.997 [2024-11-17 08:17:22.810966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:17.997 [2024-11-17 08:17:22.810996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:17.997 [2024-11-17 08:17:22.811011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.777 ms
00:17:17.997 [2024-11-17 08:17:22.811020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:17.997 [2024-11-17 08:17:22.812915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:17.997 [2024-11-17 08:17:22.812970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:17.997 [2024-11-17 08:17:22.813018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.862 ms
00:17:17.997 [2024-11-17 08:17:22.813044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:17.997 [2024-11-17 08:17:22.985670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:17.997 [2024-11-17 08:17:22.985720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:17:17.997 [2024-11-17 08:17:22.985758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 172.585 ms 00:17:17.997 [2024-11-17 08:17:22.985769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.997 [2024-11-17 08:17:22.991113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.997 [2024-11-17 08:17:22.991144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:17.997 [2024-11-17 08:17:22.991160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.300 ms 00:17:17.997 [2024-11-17 08:17:22.991170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.258 [2024-11-17 08:17:23.016891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.258 [2024-11-17 08:17:23.016930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:18.258 [2024-11-17 08:17:23.016949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.648 ms 00:17:18.258 [2024-11-17 08:17:23.016959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.258 [2024-11-17 08:17:23.032778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.258 [2024-11-17 08:17:23.032817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:18.258 [2024-11-17 08:17:23.032838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.774 ms 00:17:18.258 [2024-11-17 08:17:23.032848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.258 [2024-11-17 08:17:23.032995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.258 [2024-11-17 08:17:23.033014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:18.258 [2024-11-17 08:17:23.033029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:18.258 [2024-11-17 08:17:23.033039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.258 [2024-11-17 08:17:23.057682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.258 [2024-11-17 08:17:23.057719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:18.258 [2024-11-17 08:17:23.057737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.619 ms 00:17:18.258 [2024-11-17 08:17:23.057746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.258 [2024-11-17 08:17:23.082181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.258 [2024-11-17 08:17:23.082219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:18.258 [2024-11-17 08:17:23.082252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.392 ms 00:17:18.258 [2024-11-17 08:17:23.082262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.258 [2024-11-17 08:17:23.106167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.258 [2024-11-17 08:17:23.106206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:18.258 [2024-11-17 08:17:23.106239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.859 ms 00:17:18.258 [2024-11-17 08:17:23.106249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.258 [2024-11-17 08:17:23.130177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.258 [2024-11-17 08:17:23.130214] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:18.258 [2024-11-17 08:17:23.130234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.845 ms 00:17:18.258 [2024-11-17 08:17:23.130244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.258 [2024-11-17 08:17:23.130292] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:18.258 [2024-11-17 08:17:23.130313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:18.258 [2024-11-17 08:17:23.130542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:18.258 [2024-11-17 08:17:23.130952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.130963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.130971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.130984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.130993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131475] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:18.259 [2024-11-17 08:17:23.131576] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:18.259 [2024-11-17 08:17:23.131589] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c99d9177-a6d2-4fa2-9f79-68de91ad7e5c 00:17:18.259 [2024-11-17 08:17:23.131600] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:18.259 [2024-11-17 08:17:23.131612] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:18.259 [2024-11-17 08:17:23.131625] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:18.259 [2024-11-17 08:17:23.131637] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:18.259 [2024-11-17 08:17:23.131647] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:18.259 [2024-11-17 08:17:23.131659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:18.259 [2024-11-17 08:17:23.131669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:18.259 [2024-11-17 08:17:23.131683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:18.259 [2024-11-17 08:17:23.131692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:18.259 [2024-11-17 08:17:23.131705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.259 [2024-11-17 08:17:23.131715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:18.259 [2024-11-17 08:17:23.131730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.422 ms 00:17:18.259 [2024-11-17 08:17:23.131741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.259 [2024-11-17 08:17:23.145053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.259 [2024-11-17 08:17:23.145104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:18.259 [2024-11-17 08:17:23.145139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.269 ms 00:17:18.259 [2024-11-17 08:17:23.145150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.259 [2024-11-17 08:17:23.145655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.259 [2024-11-17 08:17:23.145686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:18.259 [2024-11-17 08:17:23.145703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:17:18.259 [2024-11-17 08:17:23.145713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.259 [2024-11-17 08:17:23.181276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.259 [2024-11-17 08:17:23.181316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.259 [2024-11-17 08:17:23.181351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.259 [2024-11-17 08:17:23.181362] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:18.259 [2024-11-17 08:17:23.181418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.259 [2024-11-17 08:17:23.181433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.259 [2024-11-17 08:17:23.181445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.259 [2024-11-17 08:17:23.181455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.259 [2024-11-17 08:17:23.181558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.259 [2024-11-17 08:17:23.181578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.259 [2024-11-17 08:17:23.181590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.259 [2024-11-17 08:17:23.181599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.259 [2024-11-17 08:17:23.181622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.259 [2024-11-17 08:17:23.181634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.259 [2024-11-17 08:17:23.181645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.259 [2024-11-17 08:17:23.181655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.259 [2024-11-17 08:17:23.260488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.259 [2024-11-17 08:17:23.260548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:18.259 [2024-11-17 08:17:23.260568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.259 [2024-11-17 08:17:23.260578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.519 [2024-11-17 08:17:23.326946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.519 [2024-11-17 08:17:23.327199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.519 [2024-11-17 08:17:23.327233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.519 [2024-11-17 08:17:23.327246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.519 [2024-11-17 08:17:23.327403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.519 [2024-11-17 08:17:23.327424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.519 [2024-11-17 08:17:23.327443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.519 [2024-11-17 08:17:23.327453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.519 [2024-11-17 08:17:23.327542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.519 [2024-11-17 08:17:23.327558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.519 [2024-11-17 08:17:23.327571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.519 [2024-11-17 08:17:23.327581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.519 [2024-11-17 08:17:23.327704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.519 [2024-11-17 08:17:23.327738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.519 [2024-11-17 08:17:23.327771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:18.520 [2024-11-17 08:17:23.327781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.520 [2024-11-17 08:17:23.327853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.520 [2024-11-17 08:17:23.327870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:18.520 [2024-11-17 08:17:23.327882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.520 [2024-11-17 08:17:23.327891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.520 [2024-11-17 08:17:23.327933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.520 [2024-11-17 08:17:23.327947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.520 [2024-11-17 08:17:23.327958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.520 [2024-11-17 08:17:23.327969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.520 [2024-11-17 08:17:23.328017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.520 [2024-11-17 08:17:23.328042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.520 [2024-11-17 08:17:23.328055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.520 [2024-11-17 08:17:23.328065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.520 [2024-11-17 08:17:23.328211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.316 ms, result 0 00:17:18.520 true 00:17:18.520 08:17:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 74360 00:17:18.520 08:17:23 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 74360 ']' 00:17:18.520 08:17:23 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 74360 00:17:18.520 08:17:23 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:17:18.520 08:17:23 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.520 08:17:23 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74360 00:17:18.520 08:17:23 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.520 killing process with pid 74360 00:17:18.520 Received shutdown signal, test time was about 4.000000 seconds 00:17:18.520 00:17:18.520 Latency(us) 00:17:18.520 [2024-11-17T08:17:23.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.520 [2024-11-17T08:17:23.532Z] =================================================================================================================== 00:17:18.520 [2024-11-17T08:17:23.532Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:18.520 08:17:23 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.520 08:17:23 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74360' 00:17:18.520 08:17:23 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 74360 00:17:18.520 08:17:23 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 74360 00:17:21.813 Remove shared memory files 00:17:21.813 08:17:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:21.813 08:17:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:21.813 08:17:26 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:21.813 08:17:26 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:21.813 08:17:26 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:21.813 08:17:26 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:21.813 08:17:26 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:21.813 08:17:26 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:21.813 ************************************ 00:17:21.813 END TEST ftl_bdevperf 00:17:21.813 ************************************ 00:17:21.813 00:17:21.813 real 0m24.783s 00:17:21.813 user 0m28.448s 00:17:21.813 sys 0m1.019s 00:17:21.813 08:17:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.813 08:17:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:22.073 08:17:26 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:22.073 08:17:26 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:22.073 08:17:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.073 08:17:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:22.073 ************************************ 00:17:22.073 START TEST ftl_trim 00:17:22.073 ************************************ 00:17:22.073 08:17:26 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:22.073 * Looking for test storage... 00:17:22.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.073 08:17:26 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:22.073 08:17:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:17:22.073 08:17:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:22.073 08:17:27 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.073 08:17:27 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:22.073 08:17:27 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.073 08:17:27 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:22.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.073 --rc genhtml_branch_coverage=1 00:17:22.073 --rc genhtml_function_coverage=1 00:17:22.073 --rc genhtml_legend=1 00:17:22.073 --rc geninfo_all_blocks=1 00:17:22.073 --rc geninfo_unexecuted_blocks=1 00:17:22.073 00:17:22.073 ' 00:17:22.073 08:17:27 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:22.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.073 --rc genhtml_branch_coverage=1 00:17:22.073 --rc genhtml_function_coverage=1 00:17:22.073 --rc genhtml_legend=1 00:17:22.073 --rc geninfo_all_blocks=1 00:17:22.073 --rc geninfo_unexecuted_blocks=1 00:17:22.073 00:17:22.073 ' 00:17:22.073 08:17:27 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:22.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.073 --rc genhtml_branch_coverage=1 00:17:22.073 --rc genhtml_function_coverage=1 00:17:22.074 --rc genhtml_legend=1 00:17:22.074 --rc geninfo_all_blocks=1 00:17:22.074 --rc geninfo_unexecuted_blocks=1 00:17:22.074 00:17:22.074 ' 00:17:22.074 08:17:27 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:22.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.074 --rc genhtml_branch_coverage=1 00:17:22.074 --rc genhtml_function_coverage=1 00:17:22.074 --rc genhtml_legend=1 00:17:22.074 --rc geninfo_all_blocks=1 00:17:22.074 --rc geninfo_unexecuted_blocks=1 00:17:22.074 00:17:22.074 ' 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:22.074 08:17:27 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:22.334 08:17:27 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:22.334 08:17:27 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:22.334 08:17:27 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:22.334 08:17:27 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:22.334 08:17:27 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:22.334 08:17:27 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:22.334 08:17:27 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=74703 00:17:22.334 08:17:27 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 74703 00:17:22.334 08:17:27 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:22.335 08:17:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 74703 ']' 00:17:22.335 08:17:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.335 08:17:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.335 08:17:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.335 08:17:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.335 08:17:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:22.335 [2024-11-17 08:17:27.212929] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:22.335 [2024-11-17 08:17:27.213388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74703 ] 00:17:22.594 [2024-11-17 08:17:27.392861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:22.594 [2024-11-17 08:17:27.481351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.594 [2024-11-17 08:17:27.481452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.594 [2024-11-17 08:17:27.481471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.531 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.531 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:23.531 08:17:28 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:23.531 08:17:28 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:23.531 08:17:28 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:23.531 08:17:28 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:23.531 08:17:28 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:23.531 08:17:28 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:23.531 08:17:28 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:23.531 08:17:28 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:23.531 08:17:28 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:23.531 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:23.531 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:23.531 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:23.531 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:23.791 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:24.050 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:24.050 { 00:17:24.050 "name": "nvme0n1", 00:17:24.050 "aliases": [ 
00:17:24.050 "1fc91f01-6b2a-4098-9e7b-10d3e24e132b" 00:17:24.050 ], 00:17:24.050 "product_name": "NVMe disk", 00:17:24.050 "block_size": 4096, 00:17:24.050 "num_blocks": 1310720, 00:17:24.050 "uuid": "1fc91f01-6b2a-4098-9e7b-10d3e24e132b", 00:17:24.050 "numa_id": -1, 00:17:24.050 "assigned_rate_limits": { 00:17:24.050 "rw_ios_per_sec": 0, 00:17:24.050 "rw_mbytes_per_sec": 0, 00:17:24.050 "r_mbytes_per_sec": 0, 00:17:24.050 "w_mbytes_per_sec": 0 00:17:24.050 }, 00:17:24.050 "claimed": true, 00:17:24.050 "claim_type": "read_many_write_one", 00:17:24.050 "zoned": false, 00:17:24.050 "supported_io_types": { 00:17:24.050 "read": true, 00:17:24.050 "write": true, 00:17:24.050 "unmap": true, 00:17:24.050 "flush": true, 00:17:24.050 "reset": true, 00:17:24.050 "nvme_admin": true, 00:17:24.050 "nvme_io": true, 00:17:24.050 "nvme_io_md": false, 00:17:24.050 "write_zeroes": true, 00:17:24.050 "zcopy": false, 00:17:24.050 "get_zone_info": false, 00:17:24.050 "zone_management": false, 00:17:24.050 "zone_append": false, 00:17:24.050 "compare": true, 00:17:24.050 "compare_and_write": false, 00:17:24.050 "abort": true, 00:17:24.050 "seek_hole": false, 00:17:24.050 "seek_data": false, 00:17:24.050 "copy": true, 00:17:24.050 "nvme_iov_md": false 00:17:24.050 }, 00:17:24.050 "driver_specific": { 00:17:24.050 "nvme": [ 00:17:24.050 { 00:17:24.050 "pci_address": "0000:00:11.0", 00:17:24.050 "trid": { 00:17:24.050 "trtype": "PCIe", 00:17:24.050 "traddr": "0000:00:11.0" 00:17:24.050 }, 00:17:24.050 "ctrlr_data": { 00:17:24.050 "cntlid": 0, 00:17:24.050 "vendor_id": "0x1b36", 00:17:24.050 "model_number": "QEMU NVMe Ctrl", 00:17:24.050 "serial_number": "12341", 00:17:24.050 "firmware_revision": "8.0.0", 00:17:24.050 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:24.050 "oacs": { 00:17:24.050 "security": 0, 00:17:24.050 "format": 1, 00:17:24.050 "firmware": 0, 00:17:24.050 "ns_manage": 1 00:17:24.050 }, 00:17:24.050 "multi_ctrlr": false, 00:17:24.050 "ana_reporting": false 00:17:24.050 }, 00:17:24.050 "vs": { 00:17:24.050 "nvme_version": "1.4" 00:17:24.050 }, 00:17:24.050 "ns_data": { 00:17:24.050 "id": 1, 00:17:24.050 "can_share": false 00:17:24.050 } 00:17:24.050 } 00:17:24.050 ], 00:17:24.050 "mp_policy": "active_passive" 00:17:24.050 } 00:17:24.050 } 00:17:24.050 ]' 00:17:24.050 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:24.050 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:17:24.050 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:24.050 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:24.050 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:24.050 08:17:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:17:24.050 08:17:28 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:24.050 08:17:28 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:24.050 08:17:28 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:24.050 08:17:28 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:24.050 08:17:28 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:24.309 08:17:29 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=90b1f2f8-0520-4ea5-b8bb-ab9849a0a581 00:17:24.309 08:17:29 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:24.309 08:17:29 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 90b1f2f8-0520-4ea5-b8bb-ab9849a0a581 00:17:24.567 08:17:29 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:24.826 08:17:29 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=8250dced-3d1d-4e22-96d1-736534a37479 00:17:24.826 08:17:29 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8250dced-3d1d-4e22-96d1-736534a37479 00:17:25.085 08:17:30 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=972aff10-e585-406c-b0a4-0478517c51b6 00:17:25.085 08:17:30 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 972aff10-e585-406c-b0a4-0478517c51b6 00:17:25.085 08:17:30 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:25.085 08:17:30 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:25.085 08:17:30 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=972aff10-e585-406c-b0a4-0478517c51b6 00:17:25.085 08:17:30 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:25.085 08:17:30 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 972aff10-e585-406c-b0a4-0478517c51b6 00:17:25.085 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=972aff10-e585-406c-b0a4-0478517c51b6 00:17:25.085 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:25.085 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:25.085 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:25.085 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 972aff10-e585-406c-b0a4-0478517c51b6 00:17:25.345 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:25.345 { 00:17:25.345 "name": "972aff10-e585-406c-b0a4-0478517c51b6", 00:17:25.345 "aliases": [ 00:17:25.345 "lvs/nvme0n1p0" 00:17:25.345 ], 00:17:25.345 "product_name": "Logical Volume", 00:17:25.345 "block_size": 4096, 00:17:25.345 "num_blocks": 26476544, 00:17:25.345 "uuid": "972aff10-e585-406c-b0a4-0478517c51b6", 00:17:25.345 "assigned_rate_limits": { 00:17:25.345 "rw_ios_per_sec": 0, 00:17:25.345 "rw_mbytes_per_sec": 0, 00:17:25.345 "r_mbytes_per_sec": 0, 00:17:25.345 "w_mbytes_per_sec": 0 00:17:25.345 }, 00:17:25.345 "claimed": false, 00:17:25.345 "zoned": false, 00:17:25.345 "supported_io_types": { 00:17:25.345 "read": true, 00:17:25.345 "write": true, 00:17:25.345 "unmap": true, 00:17:25.345 "flush": false, 00:17:25.345 "reset": true, 00:17:25.345 "nvme_admin": false, 00:17:25.345 "nvme_io": false, 00:17:25.345 "nvme_io_md": false, 00:17:25.345 "write_zeroes": true, 00:17:25.345 "zcopy": false, 00:17:25.345 "get_zone_info": false, 00:17:25.345 "zone_management": false, 00:17:25.345 "zone_append": false, 00:17:25.345 "compare": false, 00:17:25.345 "compare_and_write": false, 00:17:25.345 "abort": false, 00:17:25.345 "seek_hole": true, 00:17:25.345 "seek_data": true, 00:17:25.345 "copy": false, 00:17:25.345 "nvme_iov_md": false 00:17:25.345 }, 00:17:25.345 "driver_specific": { 00:17:25.345 "lvol": { 00:17:25.345 "lvol_store_uuid": "8250dced-3d1d-4e22-96d1-736534a37479", 00:17:25.345 "base_bdev": "nvme0n1", 00:17:25.345 "thin_provision": true, 00:17:25.345 "num_allocated_clusters": 0, 00:17:25.345 "snapshot": false, 00:17:25.345 "clone": false, 00:17:25.345 "esnap_clone": false 00:17:25.345 } 00:17:25.345 } 00:17:25.345 } 00:17:25.345 ]' 00:17:25.345 08:17:30 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:25.345 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:17:25.345 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:25.603 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:25.603 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:25.603 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:17:25.603 08:17:30 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:25.603 08:17:30 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:25.603 08:17:30 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:25.863 08:17:30 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:25.863 08:17:30 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:25.863 08:17:30 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 972aff10-e585-406c-b0a4-0478517c51b6 00:17:25.863 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=972aff10-e585-406c-b0a4-0478517c51b6 00:17:25.863 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:25.863 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:25.863 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:25.863 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 972aff10-e585-406c-b0a4-0478517c51b6 00:17:26.122 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:26.122 { 00:17:26.122 "name": "972aff10-e585-406c-b0a4-0478517c51b6", 00:17:26.122 "aliases": [ 00:17:26.122 "lvs/nvme0n1p0" 00:17:26.122 ], 00:17:26.122 "product_name": "Logical Volume", 00:17:26.122 "block_size": 4096, 00:17:26.122 "num_blocks": 26476544, 00:17:26.122 "uuid": "972aff10-e585-406c-b0a4-0478517c51b6", 00:17:26.122 "assigned_rate_limits": { 00:17:26.122 "rw_ios_per_sec": 0, 00:17:26.122 "rw_mbytes_per_sec": 0, 00:17:26.122 "r_mbytes_per_sec": 0, 00:17:26.122 "w_mbytes_per_sec": 0 00:17:26.122 }, 00:17:26.122 "claimed": false, 00:17:26.122 "zoned": false, 00:17:26.122 "supported_io_types": { 00:17:26.122 "read": true, 00:17:26.123 "write": true, 00:17:26.123 "unmap": true, 00:17:26.123 "flush": false, 00:17:26.123 "reset": true, 00:17:26.123 "nvme_admin": false, 00:17:26.123 "nvme_io": false, 00:17:26.123 "nvme_io_md": false, 00:17:26.123 "write_zeroes": true, 00:17:26.123 "zcopy": false, 00:17:26.123 "get_zone_info": false, 00:17:26.123 "zone_management": false, 00:17:26.123 "zone_append": false, 00:17:26.123 "compare": false, 00:17:26.123 "compare_and_write": false, 00:17:26.123 "abort": false, 00:17:26.123 "seek_hole": true, 00:17:26.123 "seek_data": true, 00:17:26.123 "copy": false, 00:17:26.123 "nvme_iov_md": false 00:17:26.123 }, 00:17:26.123 "driver_specific": { 00:17:26.123 "lvol": { 00:17:26.123 "lvol_store_uuid": "8250dced-3d1d-4e22-96d1-736534a37479", 00:17:26.123 "base_bdev": "nvme0n1", 00:17:26.123 "thin_provision": true, 00:17:26.123 "num_allocated_clusters": 0, 00:17:26.123 "snapshot": false, 00:17:26.123 "clone": false, 00:17:26.123 "esnap_clone": false 00:17:26.123 } 00:17:26.123 } 00:17:26.123 } 00:17:26.123 ]' 00:17:26.123 08:17:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:26.123 08:17:31 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:17:26.123 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:26.123 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:26.123 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:26.123 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:17:26.123 08:17:31 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:26.123 08:17:31 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:26.383 08:17:31 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:26.383 08:17:31 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:26.383 08:17:31 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 972aff10-e585-406c-b0a4-0478517c51b6 00:17:26.383 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=972aff10-e585-406c-b0a4-0478517c51b6 00:17:26.383 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:26.383 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:26.383 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:26.383 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 972aff10-e585-406c-b0a4-0478517c51b6 00:17:26.642 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:26.642 { 00:17:26.642 "name": "972aff10-e585-406c-b0a4-0478517c51b6", 00:17:26.642 "aliases": [ 00:17:26.642 "lvs/nvme0n1p0" 00:17:26.642 ], 00:17:26.642 "product_name": "Logical Volume", 00:17:26.642 "block_size": 4096, 00:17:26.642 "num_blocks": 26476544, 00:17:26.642 "uuid": "972aff10-e585-406c-b0a4-0478517c51b6", 00:17:26.642 "assigned_rate_limits": { 00:17:26.642 "rw_ios_per_sec": 0, 00:17:26.642 "rw_mbytes_per_sec": 0, 00:17:26.642 "r_mbytes_per_sec": 0, 00:17:26.642 "w_mbytes_per_sec": 0 00:17:26.642 }, 00:17:26.642 "claimed": false, 00:17:26.642 "zoned": false, 00:17:26.642 "supported_io_types": { 00:17:26.642 "read": true, 00:17:26.642 "write": true, 00:17:26.642 "unmap": true, 00:17:26.642 "flush": false, 00:17:26.642 "reset": true, 00:17:26.642 "nvme_admin": false, 00:17:26.642 "nvme_io": false, 00:17:26.642 "nvme_io_md": false, 00:17:26.642 "write_zeroes": true, 00:17:26.642 "zcopy": false, 00:17:26.642 "get_zone_info": false, 00:17:26.642 "zone_management": false, 00:17:26.642 "zone_append": false, 00:17:26.642 "compare": false, 00:17:26.642 "compare_and_write": false, 00:17:26.642 "abort": false, 00:17:26.642 "seek_hole": true, 00:17:26.642 "seek_data": true, 00:17:26.642 "copy": false, 00:17:26.642 "nvme_iov_md": false 00:17:26.642 }, 00:17:26.642 "driver_specific": { 00:17:26.642 "lvol": { 00:17:26.642 "lvol_store_uuid": "8250dced-3d1d-4e22-96d1-736534a37479", 00:17:26.642 "base_bdev": "nvme0n1", 00:17:26.642 "thin_provision": true, 00:17:26.642 "num_allocated_clusters": 0, 00:17:26.642 "snapshot": false, 00:17:26.642 "clone": false, 00:17:26.642 "esnap_clone": false 00:17:26.642 } 00:17:26.642 } 00:17:26.642 } 00:17:26.642 ]' 00:17:26.642 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:26.642 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:17:26.642 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:26.642 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:17:26.642 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:26.642 08:17:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:17:26.642 08:17:31 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:26.642 08:17:31 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 972aff10-e585-406c-b0a4-0478517c51b6 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:26.903 [2024-11-17 08:17:31.886524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.903 [2024-11-17 08:17:31.886592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.903 [2024-11-17 08:17:31.886631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:26.903 [2024-11-17 08:17:31.886642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.903 [2024-11-17 08:17:31.889936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.903 [2024-11-17 08:17:31.889992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.903 [2024-11-17 08:17:31.890026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.259 ms 00:17:26.903 [2024-11-17 08:17:31.890037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.903 [2024-11-17 08:17:31.890200] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.903 [2024-11-17 08:17:31.891183] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.903 [2024-11-17 08:17:31.891251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.903 [2024-11-17 08:17:31.891266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.903 [2024-11-17 08:17:31.891280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:17:26.903 [2024-11-17 08:17:31.891291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.903 [2024-11-17 08:17:31.891575] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a6c485b1-efda-4eee-863c-5f40f9b95397 00:17:26.903 [2024-11-17 08:17:31.892675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.903 [2024-11-17 08:17:31.892732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:26.903 [2024-11-17 08:17:31.892763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:26.903 [2024-11-17 08:17:31.892775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.903 [2024-11-17 08:17:31.897217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.903 [2024-11-17 08:17:31.897296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.903 [2024-11-17 08:17:31.897316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.359 ms 00:17:26.903 [2024-11-17 08:17:31.897328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.903 [2024-11-17 08:17:31.897493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.903 [2024-11-17 08:17:31.897518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.903 [2024-11-17 08:17:31.897562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.081 ms 00:17:26.903 [2024-11-17 08:17:31.897595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.903 [2024-11-17 08:17:31.897642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.903 [2024-11-17 08:17:31.897671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.903 [2024-11-17 08:17:31.897684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:26.903 [2024-11-17 08:17:31.897697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.903 [2024-11-17 08:17:31.897739] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:26.903 [2024-11-17 08:17:31.901852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.903 [2024-11-17 08:17:31.901903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.903 [2024-11-17 08:17:31.901941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.118 ms 00:17:26.903 [2024-11-17 08:17:31.901952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.903 [2024-11-17 08:17:31.902023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.903 [2024-11-17 08:17:31.902041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.903 [2024-11-17 08:17:31.902055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:26.903 [2024-11-17 08:17:31.902112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.903 [2024-11-17 08:17:31.902169] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:26.903 [2024-11-17 08:17:31.902334] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:26.903 [2024-11-17 08:17:31.902357] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.903 [2024-11-17 08:17:31.902372] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:26.903 [2024-11-17 08:17:31.902388] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.903 [2024-11-17 08:17:31.902400] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.903 [2024-11-17 08:17:31.902415] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:26.903 [2024-11-17 08:17:31.902425] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.903 [2024-11-17 08:17:31.902438] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:26.903 [2024-11-17 08:17:31.902450] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:26.903 [2024-11-17 08:17:31.902464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.903 [2024-11-17 08:17:31.902474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.903 [2024-11-17 08:17:31.902488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:17:26.903 [2024-11-17 08:17:31.902498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.903 [2024-11-17 08:17:31.902603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.903 
[2024-11-17 08:17:31.902617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.903 [2024-11-17 08:17:31.902631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:26.904 [2024-11-17 08:17:31.902642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.904 [2024-11-17 08:17:31.902776] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.904 [2024-11-17 08:17:31.902792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.904 [2024-11-17 08:17:31.902806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.904 [2024-11-17 08:17:31.902817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.904 [2024-11-17 08:17:31.902831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:26.904 [2024-11-17 08:17:31.902841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.904 [2024-11-17 08:17:31.902853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:26.904 [2024-11-17 08:17:31.902862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.904 [2024-11-17 08:17:31.902875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:26.904 [2024-11-17 08:17:31.902884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.904 [2024-11-17 08:17:31.902896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.904 [2024-11-17 08:17:31.902906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:26.904 [2024-11-17 08:17:31.902917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.904 [2024-11-17 08:17:31.902927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.904 [2024-11-17 08:17:31.902939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:26.904 [2024-11-17 08:17:31.902949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.904 [2024-11-17 08:17:31.902963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:26.904 [2024-11-17 08:17:31.902973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:26.904 [2024-11-17 08:17:31.902984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.904 [2024-11-17 08:17:31.902994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.904 [2024-11-17 08:17:31.903008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:26.904 [2024-11-17 08:17:31.903018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.904 [2024-11-17 08:17:31.903030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.904 [2024-11-17 08:17:31.903040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:26.904 [2024-11-17 08:17:31.903051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.904 [2024-11-17 08:17:31.903079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.904 [2024-11-17 08:17:31.903092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:26.904 [2024-11-17 08:17:31.903102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.904 [2024-11-17 08:17:31.903114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:26.904 [2024-11-17 08:17:31.903138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:26.904 [2024-11-17 08:17:31.903154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.904 [2024-11-17 08:17:31.903165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.904 [2024-11-17 08:17:31.903179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:26.904 [2024-11-17 08:17:31.903189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.904 [2024-11-17 08:17:31.903201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.904 [2024-11-17 08:17:31.903212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:26.904 [2024-11-17 08:17:31.903223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.904 [2024-11-17 08:17:31.903233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:26.904 [2024-11-17 08:17:31.903246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:26.904 [2024-11-17 08:17:31.903256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.904 [2024-11-17 08:17:31.903268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:26.904 [2024-11-17 08:17:31.903278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:26.904 [2024-11-17 08:17:31.903290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.904 [2024-11-17 08:17:31.903300] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.904 [2024-11-17 08:17:31.903313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.904 [2024-11-17 08:17:31.903324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.904 [2024-11-17 08:17:31.903337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.904 [2024-11-17 08:17:31.903349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:26.904 [2024-11-17 08:17:31.903392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.904 [2024-11-17 08:17:31.903404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.904 [2024-11-17 08:17:31.903417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.904 [2024-11-17 08:17:31.903428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.904 [2024-11-17 08:17:31.903440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.904 [2024-11-17 08:17:31.903455] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.904 [2024-11-17 08:17:31.903472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.904 [2024-11-17 08:17:31.903485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:26.904 [2024-11-17 08:17:31.903499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:26.904 [2024-11-17 08:17:31.903511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:26.904 [2024-11-17 08:17:31.903524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:26.904 [2024-11-17 08:17:31.903536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:26.904 [2024-11-17 08:17:31.903549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:26.904 [2024-11-17 08:17:31.903561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:26.904 [2024-11-17 08:17:31.903574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:26.904 [2024-11-17 08:17:31.903586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:26.904 [2024-11-17 08:17:31.903601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:26.904 [2024-11-17 08:17:31.903613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:26.904 [2024-11-17 08:17:31.903626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:26.904 [2024-11-17 08:17:31.903637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:26.904 [2024-11-17 08:17:31.903666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:26.904 [2024-11-17 08:17:31.903677] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.904 [2024-11-17 08:17:31.903699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.904 [2024-11-17 08:17:31.903726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:26.904 [2024-11-17 08:17:31.903739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.904 [2024-11-17 08:17:31.903750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.904 [2024-11-17 08:17:31.903762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.904 [2024-11-17 08:17:31.903774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.904 [2024-11-17 08:17:31.903787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.904 [2024-11-17 08:17:31.903799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:17:26.904 [2024-11-17 08:17:31.903812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.904 [2024-11-17 08:17:31.903898] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:26.904 [2024-11-17 08:17:31.903931] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:29.474 [2024-11-17 08:17:34.191798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.474 [2024-11-17 08:17:34.191882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:29.474 [2024-11-17 08:17:34.191917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2287.898 ms 00:17:29.474 [2024-11-17 08:17:34.191931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.474 [2024-11-17 08:17:34.222602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.474 [2024-11-17 08:17:34.222669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.474 [2024-11-17 08:17:34.222688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.277 ms 00:17:29.474 [2024-11-17 08:17:34.222701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.474 [2024-11-17 08:17:34.222893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.474 [2024-11-17 08:17:34.222916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:29.474 [2024-11-17 08:17:34.222929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:29.474 [2024-11-17 08:17:34.222958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.474 [2024-11-17 08:17:34.270477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.474 [2024-11-17 08:17:34.270584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.474 [2024-11-17 08:17:34.270618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.434 ms 00:17:29.474 [2024-11-17 08:17:34.270633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.474 [2024-11-17 08:17:34.270762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.474 [2024-11-17 08:17:34.270786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.474 [2024-11-17 08:17:34.270799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:29.474 [2024-11-17 08:17:34.270811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.474 [2024-11-17 08:17:34.271199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.474 [2024-11-17 08:17:34.271235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.474 [2024-11-17 08:17:34.271250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:17:29.474 [2024-11-17 08:17:34.271262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.474 [2024-11-17 08:17:34.271449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.474 [2024-11-17 08:17:34.271469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.474 [2024-11-17 08:17:34.271481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:17:29.474 [2024-11-17 08:17:34.271497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.474 [2024-11-17 08:17:34.287488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.474 [2024-11-17 08:17:34.287534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:29.474 [2024-11-17 08:17:34.287566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.927 ms 00:17:29.474 [2024-11-17 08:17:34.287578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.474 [2024-11-17 08:17:34.299117] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:29.474 [2024-11-17 08:17:34.312651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.474 [2024-11-17 08:17:34.312729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:29.474 [2024-11-17 08:17:34.312765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.933 ms 00:17:29.475 [2024-11-17 08:17:34.312777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.475 [2024-11-17 08:17:34.377195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.475 [2024-11-17 08:17:34.377292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:29.475 [2024-11-17 08:17:34.377316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.282 ms 00:17:29.475 [2024-11-17 08:17:34.377328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.475 [2024-11-17 08:17:34.377649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.475 [2024-11-17 08:17:34.377681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:29.475 [2024-11-17 08:17:34.377701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:29.475 [2024-11-17 08:17:34.377713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.475 [2024-11-17 08:17:34.405705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.475 [2024-11-17 08:17:34.405776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:29.475 [2024-11-17 08:17:34.405811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.952 ms 00:17:29.475 [2024-11-17 08:17:34.405823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.475 [2024-11-17 08:17:34.433457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.475 [2024-11-17 08:17:34.433512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:29.475 [2024-11-17 08:17:34.433547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.528 ms 00:17:29.475 [2024-11-17 08:17:34.433558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.475 [2024-11-17 08:17:34.434381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.475 [2024-11-17 08:17:34.434429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:29.475 [2024-11-17 08:17:34.434461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:17:29.475 [2024-11-17 08:17:34.434472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.734 [2024-11-17 08:17:34.517598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.734 [2024-11-17 08:17:34.517662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:29.734 [2024-11-17 08:17:34.517705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.081 ms 00:17:29.734 [2024-11-17 08:17:34.517716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
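[editorial note] The repeated get_bdev_size probes and the FTL startup trace above are internally consistent; a minimal back-of-envelope check in the same shell style (annotation only, not part of the test scripts — the numbers are taken from the log lines above):

    # bdev size: block_size * num_blocks, reported in MiB by get_bdev_size
    echo $((4096 * 26476544 / 1024 / 1024))   # => 103424, the size passed to bdev_lvol_create
    # L2P footprint: "L2P entries: 23592960" at "L2P address size: 4" bytes each
    echo $((23592960 * 4 / 1024 / 1024))      # => 90 MiB for a fully resident table
    # bdev_ftl_create was invoked with --l2p_dram_limit 60, which is why the
    # startup trace caps it: "l2p maximum resident size is: 59 (of 60) MiB"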
00:17:29.734 [2024-11-17 08:17:34.546250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.734 [2024-11-17 08:17:34.546304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:29.734 [2024-11-17 08:17:34.546338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.410 ms 00:17:29.734 [2024-11-17 08:17:34.546349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.734 [2024-11-17 08:17:34.573229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.734 [2024-11-17 08:17:34.573266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:29.734 [2024-11-17 08:17:34.573299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.782 ms 00:17:29.734 [2024-11-17 08:17:34.573310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.734 [2024-11-17 08:17:34.600539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.734 [2024-11-17 08:17:34.600593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:29.734 [2024-11-17 08:17:34.600627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.133 ms 00:17:29.734 [2024-11-17 08:17:34.600655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.734 [2024-11-17 08:17:34.600761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.734 [2024-11-17 08:17:34.600782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:29.734 [2024-11-17 08:17:34.600798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:29.734 [2024-11-17 08:17:34.600808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.734 [2024-11-17 08:17:34.600930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.734 [2024-11-17 08:17:34.600962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:29.734 [2024-11-17 08:17:34.600977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:29.734 [2024-11-17 08:17:34.600987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.734 [2024-11-17 08:17:34.602072] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.734 [2024-11-17 08:17:34.605858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2715.174 ms, result 0 00:17:29.734 [2024-11-17 08:17:34.606836] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.734 { 00:17:29.734 "name": "ftl0", 00:17:29.734 "uuid": "a6c485b1-efda-4eee-863c-5f40f9b95397" 00:17:29.734 } 00:17:29.734 08:17:34 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:29.734 08:17:34 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:29.734 08:17:34 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:29.734 08:17:34 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:17:29.734 08:17:34 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:29.734 08:17:34 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:29.734 08:17:34 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:29.993 08:17:34 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:30.252 [ 00:17:30.252 { 00:17:30.252 "name": "ftl0", 00:17:30.252 "aliases": [ 00:17:30.252 "a6c485b1-efda-4eee-863c-5f40f9b95397" 00:17:30.252 ], 00:17:30.252 "product_name": "FTL disk", 00:17:30.252 "block_size": 4096, 00:17:30.252 "num_blocks": 23592960, 00:17:30.252 "uuid": "a6c485b1-efda-4eee-863c-5f40f9b95397", 00:17:30.252 "assigned_rate_limits": { 00:17:30.252 "rw_ios_per_sec": 0, 00:17:30.252 "rw_mbytes_per_sec": 0, 00:17:30.252 "r_mbytes_per_sec": 0, 00:17:30.252 "w_mbytes_per_sec": 0 00:17:30.252 }, 00:17:30.252 "claimed": false, 00:17:30.252 "zoned": false, 00:17:30.252 "supported_io_types": { 00:17:30.252 "read": true, 00:17:30.252 "write": true, 00:17:30.252 "unmap": true, 00:17:30.252 "flush": true, 00:17:30.252 "reset": false, 00:17:30.252 "nvme_admin": false, 00:17:30.252 "nvme_io": false, 00:17:30.252 "nvme_io_md": false, 00:17:30.252 "write_zeroes": true, 00:17:30.252 "zcopy": false, 00:17:30.252 "get_zone_info": false, 00:17:30.252 "zone_management": false, 00:17:30.252 "zone_append": false, 00:17:30.252 "compare": false, 00:17:30.252 "compare_and_write": false, 00:17:30.252 "abort": false, 00:17:30.252 "seek_hole": false, 00:17:30.252 "seek_data": false, 00:17:30.252 "copy": false, 00:17:30.252 "nvme_iov_md": false 00:17:30.252 }, 00:17:30.252 "driver_specific": { 00:17:30.252 "ftl": { 00:17:30.252 "base_bdev": "972aff10-e585-406c-b0a4-0478517c51b6", 00:17:30.252 "cache": "nvc0n1p0" 00:17:30.252 } 00:17:30.252 } 00:17:30.252 } 00:17:30.252 ] 00:17:30.252 08:17:35 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:17:30.252 08:17:35 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:30.252 08:17:35 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:30.511 08:17:35 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:30.511 08:17:35 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:30.770 08:17:35 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:30.770 { 00:17:30.770 "name": "ftl0", 00:17:30.770 "aliases": [ 00:17:30.770 "a6c485b1-efda-4eee-863c-5f40f9b95397" 00:17:30.770 ], 00:17:30.770 "product_name": "FTL disk", 00:17:30.770 "block_size": 4096, 00:17:30.770 "num_blocks": 23592960, 00:17:30.770 "uuid": "a6c485b1-efda-4eee-863c-5f40f9b95397", 00:17:30.770 "assigned_rate_limits": { 00:17:30.770 "rw_ios_per_sec": 0, 00:17:30.770 "rw_mbytes_per_sec": 0, 00:17:30.770 "r_mbytes_per_sec": 0, 00:17:30.770 "w_mbytes_per_sec": 0 00:17:30.770 }, 00:17:30.770 "claimed": false, 00:17:30.770 "zoned": false, 00:17:30.770 "supported_io_types": { 00:17:30.770 "read": true, 00:17:30.770 "write": true, 00:17:30.770 "unmap": true, 00:17:30.770 "flush": true, 00:17:30.770 "reset": false, 00:17:30.770 "nvme_admin": false, 00:17:30.770 "nvme_io": false, 00:17:30.770 "nvme_io_md": false, 00:17:30.770 "write_zeroes": true, 00:17:30.770 "zcopy": false, 00:17:30.770 "get_zone_info": false, 00:17:30.770 "zone_management": false, 00:17:30.770 "zone_append": false, 00:17:30.770 "compare": false, 00:17:30.770 "compare_and_write": false, 00:17:30.770 "abort": false, 00:17:30.770 "seek_hole": false, 00:17:30.770 "seek_data": false, 00:17:30.770 "copy": false, 00:17:30.770 "nvme_iov_md": false 00:17:30.770 }, 00:17:30.770 "driver_specific": { 00:17:30.770 "ftl": { 00:17:30.770 "base_bdev": "972aff10-e585-406c-b0a4-0478517c51b6", 
00:17:30.770 "cache": "nvc0n1p0" 00:17:30.770 } 00:17:30.770 } 00:17:30.770 } 00:17:30.770 ]' 00:17:30.770 08:17:35 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:31.029 08:17:35 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:31.029 08:17:35 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:31.029 [2024-11-17 08:17:35.993352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.029 [2024-11-17 08:17:35.993424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:31.029 [2024-11-17 08:17:35.993446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:31.029 [2024-11-17 08:17:35.993462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.029 [2024-11-17 08:17:35.993500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:31.029 [2024-11-17 08:17:35.996539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.029 [2024-11-17 08:17:35.996586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:31.029 [2024-11-17 08:17:35.996623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.014 ms 00:17:31.029 [2024-11-17 08:17:35.996637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.029 [2024-11-17 08:17:35.997229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.029 [2024-11-17 08:17:35.997263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:31.029 [2024-11-17 08:17:35.997280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:17:31.029 [2024-11-17 08:17:35.997291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.029 [2024-11-17 08:17:36.000685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.029 [2024-11-17 08:17:36.000731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:31.029 [2024-11-17 08:17:36.000762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.361 ms 00:17:31.029 [2024-11-17 08:17:36.000772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.029 [2024-11-17 08:17:36.007217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.029 [2024-11-17 08:17:36.007263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:31.029 [2024-11-17 08:17:36.007312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.386 ms 00:17:31.029 [2024-11-17 08:17:36.007323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.029 [2024-11-17 08:17:36.033934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.030 [2024-11-17 08:17:36.033988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:31.030 [2024-11-17 08:17:36.034024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.489 ms 00:17:31.030 [2024-11-17 08:17:36.034034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.290 [2024-11-17 08:17:36.051800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.290 [2024-11-17 08:17:36.051852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:31.290 [2024-11-17 08:17:36.051887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 17.645 ms 00:17:31.290 [2024-11-17 08:17:36.051900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.290 [2024-11-17 08:17:36.052214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.290 [2024-11-17 08:17:36.052243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:31.290 [2024-11-17 08:17:36.052260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:17:31.290 [2024-11-17 08:17:36.052272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.290 [2024-11-17 08:17:36.079206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.290 [2024-11-17 08:17:36.079260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:31.290 [2024-11-17 08:17:36.079294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.896 ms 00:17:31.290 [2024-11-17 08:17:36.079304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.290 [2024-11-17 08:17:36.106327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.290 [2024-11-17 08:17:36.106380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:31.290 [2024-11-17 08:17:36.106415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.909 ms 00:17:31.290 [2024-11-17 08:17:36.106425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.290 [2024-11-17 08:17:36.132733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.290 [2024-11-17 08:17:36.132787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:31.290 [2024-11-17 08:17:36.132820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.215 ms 00:17:31.290 [2024-11-17 08:17:36.132830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.290 [2024-11-17 08:17:36.158935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.290 [2024-11-17 08:17:36.158987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:31.290 [2024-11-17 08:17:36.159021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.953 ms 00:17:31.290 [2024-11-17 08:17:36.159032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.290 [2024-11-17 08:17:36.159139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:31.290 [2024-11-17 08:17:36.159163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159250] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:31.290 [2024-11-17 08:17:36.159551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 
[2024-11-17 08:17:36.159635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:31.291 [2024-11-17 08:17:36.159954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.159989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:31.291 [2024-11-17 08:17:36.160530] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:31.291 [2024-11-17 08:17:36.160545] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a6c485b1-efda-4eee-863c-5f40f9b95397 00:17:31.291 [2024-11-17 08:17:36.160555] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:31.291 [2024-11-17 08:17:36.160567] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:31.291 [2024-11-17 08:17:36.160577] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:31.291 [2024-11-17 08:17:36.160589] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:31.291 [2024-11-17 08:17:36.160601] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:31.291 [2024-11-17 08:17:36.160614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:31.291 [2024-11-17 08:17:36.160624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:31.291 [2024-11-17 08:17:36.160634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:31.291 [2024-11-17 08:17:36.160643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:31.291 [2024-11-17 08:17:36.160656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.291 [2024-11-17 08:17:36.160666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:31.291 [2024-11-17 08:17:36.160680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.523 ms 00:17:31.291 [2024-11-17 08:17:36.160690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.291 [2024-11-17 08:17:36.175207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.291 [2024-11-17 08:17:36.175257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:31.291 [2024-11-17 08:17:36.175296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.474 ms 00:17:31.292 [2024-11-17 08:17:36.175307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.292 [2024-11-17 08:17:36.175822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.292 [2024-11-17 08:17:36.175857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:31.292 [2024-11-17 08:17:36.175873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:17:31.292 [2024-11-17 08:17:36.175884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.292 [2024-11-17 08:17:36.225062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.292 [2024-11-17 08:17:36.225134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.292 [2024-11-17 08:17:36.225169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.292 [2024-11-17 08:17:36.225179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.292 [2024-11-17 08:17:36.225296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.292 [2024-11-17 08:17:36.225314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.292 [2024-11-17 08:17:36.225327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.292 [2024-11-17 08:17:36.225337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.292 [2024-11-17 08:17:36.225449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.292 [2024-11-17 08:17:36.225468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.292 [2024-11-17 08:17:36.225488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.292 [2024-11-17 08:17:36.225498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.292 [2024-11-17 08:17:36.225539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.292 [2024-11-17 08:17:36.225553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.292 [2024-11-17 08:17:36.225566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.292 [2024-11-17 08:17:36.225577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.551 [2024-11-17 08:17:36.322838] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.552 [2024-11-17 08:17:36.322910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.552 [2024-11-17 08:17:36.322946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.552 [2024-11-17 08:17:36.322957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.552 [2024-11-17 08:17:36.393421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.552 [2024-11-17 08:17:36.393490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.552 [2024-11-17 08:17:36.393525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.552 [2024-11-17 08:17:36.393536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.552 [2024-11-17 08:17:36.393645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.552 [2024-11-17 08:17:36.393664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.552 [2024-11-17 08:17:36.393716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.552 [2024-11-17 08:17:36.393729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.552 [2024-11-17 08:17:36.393823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.552 [2024-11-17 08:17:36.393837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.552 [2024-11-17 08:17:36.393851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.552 [2024-11-17 08:17:36.393861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.552 [2024-11-17 08:17:36.393999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.552 [2024-11-17 08:17:36.394018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.552 [2024-11-17 08:17:36.394033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.552 [2024-11-17 08:17:36.394044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.552 [2024-11-17 08:17:36.394137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.552 [2024-11-17 08:17:36.394156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:31.552 [2024-11-17 08:17:36.394170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.552 [2024-11-17 08:17:36.394180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.552 [2024-11-17 08:17:36.394261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.552 [2024-11-17 08:17:36.394276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.552 [2024-11-17 08:17:36.394293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.552 [2024-11-17 08:17:36.394303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.552 [2024-11-17 08:17:36.394377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.552 [2024-11-17 08:17:36.394394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.552 [2024-11-17 08:17:36.394408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.552 [2024-11-17 08:17:36.394419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:31.552 [2024-11-17 08:17:36.394640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 401.275 ms, result 0 00:17:31.552 true 00:17:31.552 08:17:36 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 74703 00:17:31.552 08:17:36 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74703 ']' 00:17:31.552 08:17:36 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74703 00:17:31.552 08:17:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:31.552 08:17:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.552 08:17:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74703 00:17:31.552 killing process with pid 74703 00:17:31.552 08:17:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.552 08:17:36 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.552 08:17:36 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74703' 00:17:31.552 08:17:36 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 74703 00:17:31.552 08:17:36 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 74703 00:17:36.822 08:17:40 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:37.081 65536+0 records in 00:17:37.081 65536+0 records out 00:17:37.081 268435456 bytes (268 MB, 256 MiB) copied, 0.975377 s, 275 MB/s 00:17:37.081 08:17:41 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:37.081 [2024-11-17 08:17:41.900910] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
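The dd run above is internally consistent: 65536 records of 4 KiB are 268435456 bytes, i.e. 256 MiB, and dividing by the elapsed 0.975377 s gives the reported decimal 275 MB/s. A minimal shell sketch checking those figures (editor-added, not part of the harness; the variable name bytes is illustrative):

bytes=$((65536 * 4096))                        # 268435456 bytes, as dd reports
echo "$((bytes / 1024 / 1024)) MiB"            # 256 MiB in binary units (268 MB decimal)
awk -v b="$bytes" 'BEGIN { printf "%.0f MB/s\n", b / 0.975377 / 1e6 }'   # ~275 MB/s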
00:17:37.081 [2024-11-17 08:17:41.901055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74891 ] 00:17:37.081 [2024-11-17 08:17:42.062687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.340 [2024-11-17 08:17:42.142775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.600 [2024-11-17 08:17:42.408380] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:37.600 [2024-11-17 08:17:42.408468] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:37.600 [2024-11-17 08:17:42.565587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.600 [2024-11-17 08:17:42.565634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:37.600 [2024-11-17 08:17:42.565666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:37.600 [2024-11-17 08:17:42.565676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.600 [2024-11-17 08:17:42.568402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.600 [2024-11-17 08:17:42.568440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:37.600 [2024-11-17 08:17:42.568470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.700 ms 00:17:37.600 [2024-11-17 08:17:42.568480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.600 [2024-11-17 08:17:42.568588] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:37.600 [2024-11-17 08:17:42.569493] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:37.600 [2024-11-17 08:17:42.569548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.600 [2024-11-17 08:17:42.569562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:37.600 [2024-11-17 08:17:42.569573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:17:37.600 [2024-11-17 08:17:42.569583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.600 [2024-11-17 08:17:42.570862] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:37.600 [2024-11-17 08:17:42.584520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.600 [2024-11-17 08:17:42.584562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:37.600 [2024-11-17 08:17:42.584593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.658 ms 00:17:37.600 [2024-11-17 08:17:42.584603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.600 [2024-11-17 08:17:42.584708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.600 [2024-11-17 08:17:42.584728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:37.600 [2024-11-17 08:17:42.584739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:37.600 [2024-11-17 08:17:42.584749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.600 [2024-11-17 08:17:42.589117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:37.600 [2024-11-17 08:17:42.589169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:37.600 [2024-11-17 08:17:42.589183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.321 ms 00:17:37.600 [2024-11-17 08:17:42.589192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.600 [2024-11-17 08:17:42.589295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.600 [2024-11-17 08:17:42.589313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:37.600 [2024-11-17 08:17:42.589324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:37.600 [2024-11-17 08:17:42.589333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.600 [2024-11-17 08:17:42.589366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.600 [2024-11-17 08:17:42.589384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:37.600 [2024-11-17 08:17:42.589425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:37.600 [2024-11-17 08:17:42.589435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.600 [2024-11-17 08:17:42.589466] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:37.600 [2024-11-17 08:17:42.592965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.600 [2024-11-17 08:17:42.592998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:37.600 [2024-11-17 08:17:42.593027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.507 ms 00:17:37.600 [2024-11-17 08:17:42.593036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.600 [2024-11-17 08:17:42.593078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.600 [2024-11-17 08:17:42.593123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:37.600 [2024-11-17 08:17:42.593139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:37.600 [2024-11-17 08:17:42.593149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.600 [2024-11-17 08:17:42.593176] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:37.600 [2024-11-17 08:17:42.593204] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:37.601 [2024-11-17 08:17:42.593259] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:37.601 [2024-11-17 08:17:42.593292] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:37.601 [2024-11-17 08:17:42.593406] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:37.601 [2024-11-17 08:17:42.593420] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:37.601 [2024-11-17 08:17:42.593433] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:37.601 [2024-11-17 08:17:42.593446] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:37.601 [2024-11-17 08:17:42.593463] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:37.601 [2024-11-17 08:17:42.593476] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:37.601 [2024-11-17 08:17:42.593486] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:37.601 [2024-11-17 08:17:42.593496] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:37.601 [2024-11-17 08:17:42.593519] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:37.601 [2024-11-17 08:17:42.593530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.601 [2024-11-17 08:17:42.593539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:37.601 [2024-11-17 08:17:42.593550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:17:37.601 [2024-11-17 08:17:42.593559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.601 [2024-11-17 08:17:42.593668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.601 [2024-11-17 08:17:42.593696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:37.601 [2024-11-17 08:17:42.593715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:37.601 [2024-11-17 08:17:42.593725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.601 [2024-11-17 08:17:42.593823] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:37.601 [2024-11-17 08:17:42.593839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:37.601 [2024-11-17 08:17:42.593850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:37.601 [2024-11-17 08:17:42.593860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.601 [2024-11-17 08:17:42.593869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:37.601 [2024-11-17 08:17:42.593878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:37.601 [2024-11-17 08:17:42.593887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:37.601 [2024-11-17 08:17:42.593897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:37.601 [2024-11-17 08:17:42.593906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:37.601 [2024-11-17 08:17:42.593915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:37.601 [2024-11-17 08:17:42.593924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:37.601 [2024-11-17 08:17:42.593933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:37.601 [2024-11-17 08:17:42.593942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:37.601 [2024-11-17 08:17:42.593964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:37.601 [2024-11-17 08:17:42.593974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:37.601 [2024-11-17 08:17:42.593983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.601 [2024-11-17 08:17:42.593992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:37.601 [2024-11-17 08:17:42.594001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:37.601 [2024-11-17 08:17:42.594009] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.601 [2024-11-17 08:17:42.594019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:37.601 [2024-11-17 08:17:42.594027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:37.601 [2024-11-17 08:17:42.594037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.601 [2024-11-17 08:17:42.594045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:37.601 [2024-11-17 08:17:42.594055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:37.601 [2024-11-17 08:17:42.594064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.601 [2024-11-17 08:17:42.594073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:37.601 [2024-11-17 08:17:42.594120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:37.601 [2024-11-17 08:17:42.594132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.601 [2024-11-17 08:17:42.594142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:37.601 [2024-11-17 08:17:42.594151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:37.601 [2024-11-17 08:17:42.594160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.601 [2024-11-17 08:17:42.594169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:37.601 [2024-11-17 08:17:42.594179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:37.601 [2024-11-17 08:17:42.594187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:37.601 [2024-11-17 08:17:42.594197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:37.601 [2024-11-17 08:17:42.594220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:37.601 [2024-11-17 08:17:42.594244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:37.601 [2024-11-17 08:17:42.594253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:37.601 [2024-11-17 08:17:42.594263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:37.601 [2024-11-17 08:17:42.594272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.601 [2024-11-17 08:17:42.594281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:37.601 [2024-11-17 08:17:42.594290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:37.601 [2024-11-17 08:17:42.594299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.601 [2024-11-17 08:17:42.594309] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:37.601 [2024-11-17 08:17:42.594319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:37.601 [2024-11-17 08:17:42.594329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:37.601 [2024-11-17 08:17:42.594352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.601 [2024-11-17 08:17:42.594363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:37.601 [2024-11-17 08:17:42.594372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:37.601 [2024-11-17 08:17:42.594382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:37.601 
[2024-11-17 08:17:42.594391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:37.601 [2024-11-17 08:17:42.594400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:37.601 [2024-11-17 08:17:42.594409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:37.601 [2024-11-17 08:17:42.594420] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:37.601 [2024-11-17 08:17:42.594433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:37.601 [2024-11-17 08:17:42.594458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:37.601 [2024-11-17 08:17:42.594468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:37.601 [2024-11-17 08:17:42.594478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:37.601 [2024-11-17 08:17:42.594487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:37.601 [2024-11-17 08:17:42.594497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:37.601 [2024-11-17 08:17:42.594507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:37.601 [2024-11-17 08:17:42.594518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:37.601 [2024-11-17 08:17:42.594527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:37.601 [2024-11-17 08:17:42.594537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:37.601 [2024-11-17 08:17:42.594547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:37.601 [2024-11-17 08:17:42.594557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:37.601 [2024-11-17 08:17:42.594566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:37.601 [2024-11-17 08:17:42.594576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:37.601 [2024-11-17 08:17:42.594586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:37.601 [2024-11-17 08:17:42.594596] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:37.601 [2024-11-17 08:17:42.594606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:37.601 [2024-11-17 08:17:42.594617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:37.601 [2024-11-17 08:17:42.594627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:37.601 [2024-11-17 08:17:42.594637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:37.601 [2024-11-17 08:17:42.594647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:37.601 [2024-11-17 08:17:42.594658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.601 [2024-11-17 08:17:42.594668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:37.601 [2024-11-17 08:17:42.594683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:17:37.601 [2024-11-17 08:17:42.594693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.861 [2024-11-17 08:17:42.622408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.861 [2024-11-17 08:17:42.622460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:37.861 [2024-11-17 08:17:42.622493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.633 ms 00:17:37.861 [2024-11-17 08:17:42.622503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.861 [2024-11-17 08:17:42.622658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.861 [2024-11-17 08:17:42.622682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:37.861 [2024-11-17 08:17:42.622693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:37.861 [2024-11-17 08:17:42.622703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.861 [2024-11-17 08:17:42.667586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.861 [2024-11-17 08:17:42.667663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:37.861 [2024-11-17 08:17:42.667708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.840 ms 00:17:37.861 [2024-11-17 08:17:42.667723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.861 [2024-11-17 08:17:42.667857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.667876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:37.862 [2024-11-17 08:17:42.667888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:37.862 [2024-11-17 08:17:42.667897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.668261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.668288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:37.862 [2024-11-17 08:17:42.668301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:17:37.862 [2024-11-17 08:17:42.668317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.668457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.668475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:37.862 [2024-11-17 08:17:42.668487] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:17:37.862 [2024-11-17 08:17:42.668497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.682484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.682527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:37.862 [2024-11-17 08:17:42.682558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.960 ms 00:17:37.862 [2024-11-17 08:17:42.682568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.695495] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:37.862 [2024-11-17 08:17:42.695550] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:37.862 [2024-11-17 08:17:42.695567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.695577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:37.862 [2024-11-17 08:17:42.695589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.877 ms 00:17:37.862 [2024-11-17 08:17:42.695599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.718836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.718874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:37.862 [2024-11-17 08:17:42.718917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.151 ms 00:17:37.862 [2024-11-17 08:17:42.718927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.731471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.731526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:37.862 [2024-11-17 08:17:42.731541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.454 ms 00:17:37.862 [2024-11-17 08:17:42.731551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.743831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.743866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:37.862 [2024-11-17 08:17:42.743895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.192 ms 00:17:37.862 [2024-11-17 08:17:42.743904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.744672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.744719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:37.862 [2024-11-17 08:17:42.744733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:17:37.862 [2024-11-17 08:17:42.744744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.802208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.802289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:37.862 [2024-11-17 08:17:42.802324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.433 ms 00:17:37.862 [2024-11-17 08:17:42.802334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.812212] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:37.862 [2024-11-17 08:17:42.823706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.823758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:37.862 [2024-11-17 08:17:42.823790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.223 ms 00:17:37.862 [2024-11-17 08:17:42.823800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.823921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.823943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:37.862 [2024-11-17 08:17:42.823955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:37.862 [2024-11-17 08:17:42.823964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.824024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.824055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:37.862 [2024-11-17 08:17:42.824081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:37.862 [2024-11-17 08:17:42.824091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.824142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.824159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:37.862 [2024-11-17 08:17:42.824174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:37.862 [2024-11-17 08:17:42.824184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.824225] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:37.862 [2024-11-17 08:17:42.824240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.824250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:37.862 [2024-11-17 08:17:42.824261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:37.862 [2024-11-17 08:17:42.824271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.848839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.848884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:37.862 [2024-11-17 08:17:42.848915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.544 ms 00:17:37.862 [2024-11-17 08:17:42.848925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.862 [2024-11-17 08:17:42.849030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.862 [2024-11-17 08:17:42.849050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:37.862 [2024-11-17 08:17:42.849061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:37.862 [2024-11-17 08:17:42.849071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
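The superblock region table and the MiB layout dump above are two views of the same geometry. Assuming the 4 KiB FTL block size those numbers imply (the block size itself is not printed in this log), the hex block offsets and sizes convert exactly to the MiB figures, and the l2p region is sized to hold every L2P entry. A minimal cross-check in shell (editor-added sketch; blk is an assumed value, all other constants are copied from the dumps above):

blk=4096                                   # assumed FTL block size in bytes
echo $(( 0x5a00 * blk / 1024 / 1024 ))     # 90     -> "Region l2p ... blocks: 90.00 MiB"
echo $(( 0x20 * blk ))                     # 131072 bytes = 0.125 MiB -> "offset: 0.12 MiB"
echo $(( 0x1900000 * blk / 1024 / 1024 ))  # 102400 -> "Region data_btm ... blocks: 102400.00 MiB"
echo $(( 23592960 * 4 / 1024 / 1024 ))     # 90     -> 23592960 L2P entries x 4-byte addresses fill the l2p region exactly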
00:17:37.862 [2024-11-17 08:17:42.850478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:37.862 [2024-11-17 08:17:42.853895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.523 ms, result 0 00:17:37.862 [2024-11-17 08:17:42.854732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:37.862 [2024-11-17 08:17:42.868726] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:39.241  [2024-11-17T08:17:45.191Z] Copying: 21/256 [MB] (21 MBps) [2024-11-17T08:17:46.130Z] Copying: 43/256 [MB] (21 MBps) [2024-11-17T08:17:47.068Z] Copying: 65/256 [MB] (21 MBps) [2024-11-17T08:17:48.008Z] Copying: 87/256 [MB] (21 MBps) [2024-11-17T08:17:48.945Z] Copying: 109/256 [MB] (22 MBps) [2024-11-17T08:17:49.881Z] Copying: 131/256 [MB] (22 MBps) [2024-11-17T08:17:51.260Z] Copying: 153/256 [MB] (21 MBps) [2024-11-17T08:17:52.197Z] Copying: 176/256 [MB] (22 MBps) [2024-11-17T08:17:53.134Z] Copying: 198/256 [MB] (22 MBps) [2024-11-17T08:17:54.071Z] Copying: 219/256 [MB] (21 MBps) [2024-11-17T08:17:54.639Z] Copying: 241/256 [MB] (21 MBps) [2024-11-17T08:17:54.639Z] Copying: 256/256 [MB] (average 21 MBps)[2024-11-17 08:17:54.539950] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:49.627 [2024-11-17 08:17:54.550509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.627 [2024-11-17 08:17:54.550591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:49.627 [2024-11-17 08:17:54.550632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:49.627 [2024-11-17 08:17:54.550650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.627 [2024-11-17 08:17:54.550694] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:49.627 [2024-11-17 08:17:54.553942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.627 [2024-11-17 08:17:54.554008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:49.627 [2024-11-17 08:17:54.554039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:17:49.627 [2024-11-17 08:17:54.554050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.627 [2024-11-17 08:17:54.555914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.627 [2024-11-17 08:17:54.555950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:49.627 [2024-11-17 08:17:54.555979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.829 ms 00:17:49.627 [2024-11-17 08:17:54.555989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.627 [2024-11-17 08:17:54.562467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.627 [2024-11-17 08:17:54.562533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:49.627 [2024-11-17 08:17:54.562569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.456 ms 00:17:49.627 [2024-11-17 08:17:54.562579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.627 [2024-11-17 08:17:54.568470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.627 
[2024-11-17 08:17:54.568502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:49.627 [2024-11-17 08:17:54.568531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.836 ms 00:17:49.627 [2024-11-17 08:17:54.568540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.627 [2024-11-17 08:17:54.592814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.628 [2024-11-17 08:17:54.592849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:49.628 [2024-11-17 08:17:54.592878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.226 ms 00:17:49.628 [2024-11-17 08:17:54.592887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.628 [2024-11-17 08:17:54.607708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.628 [2024-11-17 08:17:54.607744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:49.628 [2024-11-17 08:17:54.607778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.779 ms 00:17:49.628 [2024-11-17 08:17:54.607791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.628 [2024-11-17 08:17:54.607939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.628 [2024-11-17 08:17:54.607957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:49.628 [2024-11-17 08:17:54.607969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:49.628 [2024-11-17 08:17:54.607978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.628 [2024-11-17 08:17:54.632617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.628 [2024-11-17 08:17:54.632651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:49.628 [2024-11-17 08:17:54.632679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.591 ms 00:17:49.628 [2024-11-17 08:17:54.632689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.888 [2024-11-17 08:17:54.658372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.888 [2024-11-17 08:17:54.658416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:49.888 [2024-11-17 08:17:54.658445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.642 ms 00:17:49.888 [2024-11-17 08:17:54.658454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.888 [2024-11-17 08:17:54.682400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.888 [2024-11-17 08:17:54.682436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:49.888 [2024-11-17 08:17:54.682464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.904 ms 00:17:49.888 [2024-11-17 08:17:54.682474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.888 [2024-11-17 08:17:54.706218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.888 [2024-11-17 08:17:54.706253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:49.888 [2024-11-17 08:17:54.706282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.677 ms 00:17:49.888 [2024-11-17 08:17:54.706291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.888 [2024-11-17 08:17:54.706333] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:49.888 [2024-11-17 08:17:54.706359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:49.888 [2024-11-17 08:17:54.706635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706645] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 
08:17:54.706909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.706992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:17:49.889 [2024-11-17 08:17:54.707182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:49.889 [2024-11-17 08:17:54.707499] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:49.889 [2024-11-17 08:17:54.707509] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a6c485b1-efda-4eee-863c-5f40f9b95397 00:17:49.889 [2024-11-17 08:17:54.707520] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:49.889 [2024-11-17 08:17:54.707530] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:49.889 [2024-11-17 08:17:54.707539] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:49.889 [2024-11-17 08:17:54.707550] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:49.889 [2024-11-17 08:17:54.707559] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:49.889 [2024-11-17 08:17:54.707569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:49.889 [2024-11-17 08:17:54.707579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:49.889 [2024-11-17 08:17:54.707588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:49.889 [2024-11-17 08:17:54.707598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:49.889 [2024-11-17 08:17:54.707607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.889 [2024-11-17 08:17:54.707617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:49.889 [2024-11-17 08:17:54.707634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:17:49.889 [2024-11-17 08:17:54.707644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.889 [2024-11-17 08:17:54.720995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.890 [2024-11-17 08:17:54.721028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:49.890 [2024-11-17 08:17:54.721057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.308 ms 00:17:49.890 [2024-11-17 08:17:54.721067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.890 [2024-11-17 08:17:54.721568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.890 [2024-11-17 08:17:54.721607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:49.890 [2024-11-17 08:17:54.721620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:17:49.890 [2024-11-17 08:17:54.721630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.890 [2024-11-17 08:17:54.758905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.890 [2024-11-17 08:17:54.758946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:49.890 [2024-11-17 08:17:54.758975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.890 [2024-11-17 08:17:54.758985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.890 [2024-11-17 08:17:54.759063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.890 [2024-11-17 08:17:54.759081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:49.890 [2024-11-17 08:17:54.759117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:49.890 [2024-11-17 08:17:54.759129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.890 [2024-11-17 08:17:54.759187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.890 [2024-11-17 08:17:54.759204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.890 [2024-11-17 08:17:54.759231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.890 [2024-11-17 08:17:54.759256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.890 [2024-11-17 08:17:54.759295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.890 [2024-11-17 08:17:54.759307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.890 [2024-11-17 08:17:54.759324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.890 [2024-11-17 08:17:54.759334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.890 [2024-11-17 08:17:54.837957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.890 [2024-11-17 08:17:54.838012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.890 [2024-11-17 08:17:54.838043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.890 [2024-11-17 08:17:54.838053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.149 [2024-11-17 08:17:54.904301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.149 [2024-11-17 08:17:54.904353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.149 [2024-11-17 08:17:54.904390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.149 [2024-11-17 08:17:54.904400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.149 [2024-11-17 08:17:54.904474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.149 [2024-11-17 08:17:54.904490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.149 [2024-11-17 08:17:54.904500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.149 [2024-11-17 08:17:54.904509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.149 [2024-11-17 08:17:54.904539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.149 [2024-11-17 08:17:54.904551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.149 [2024-11-17 08:17:54.904561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.149 [2024-11-17 08:17:54.904575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.149 [2024-11-17 08:17:54.904746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.149 [2024-11-17 08:17:54.904763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.149 [2024-11-17 08:17:54.904775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.149 [2024-11-17 08:17:54.904785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.149 [2024-11-17 08:17:54.904832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.149 [2024-11-17 08:17:54.904848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:50.149 
[2024-11-17 08:17:54.904859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.149 [2024-11-17 08:17:54.904869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.150 [2024-11-17 08:17:54.904918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.150 [2024-11-17 08:17:54.904933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.150 [2024-11-17 08:17:54.904944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.150 [2024-11-17 08:17:54.904965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.150 [2024-11-17 08:17:54.905021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.150 [2024-11-17 08:17:54.905036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.150 [2024-11-17 08:17:54.905047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.150 [2024-11-17 08:17:54.905062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.150 [2024-11-17 08:17:54.905231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.739 ms, result 0 00:17:51.088 00:17:51.088 00:17:51.088 08:17:55 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:51.088 08:17:55 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=75038 00:17:51.088 08:17:55 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 75038 00:17:51.088 08:17:55 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 75038 ']' 00:17:51.088 08:17:55 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.088 08:17:55 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.088 08:17:55 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.088 08:17:55 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.088 08:17:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:51.088 [2024-11-17 08:17:55.906219] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
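The xtrace above (ftl/trim.sh@71-73 handing svcpid=75038 to waitforlisten) is the standard SPDK harness launch-and-wait pattern: start spdk_tgt in the background, then poll the RPC socket until the target answers or the process dies. The sketch below is a minimal illustration of that pattern, not the exact autotest_common.sh source; the retry count and sleep interval are assumptions.

```bash
#!/usr/bin/env bash
# Minimal sketch of the launch-and-wait step traced in this log.
# Assumptions: default RPC socket path, stock rpc.py client, and an
# invented 100 x 0.5 s retry budget (the real waitforlisten differs).
SPDK_DIR=/home/vagrant/spdk_repo/spdk
rpc_addr=/var/tmp/spdk.sock

# Launch the target in the background, as trim.sh@71 does, and keep its PID.
"$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
svcpid=$!

# Poll until the RPC server answers; bail out early if the process died.
for ((i = 100; i > 0; i--)); do
    kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    if "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
        break   # socket is up and the target is servicing RPCs
    fi
    sleep 0.5
done
(( i > 0 )) || { echo "timed out waiting for $rpc_addr" >&2; exit 1; }
```

Once the socket answers, a script can drive the target over RPC exactly as trim.sh does next (load_config at trim.sh@75, then the bdev_ftl_unmap calls at trim.sh@78 and @79).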
00:17:51.088 [2024-11-17 08:17:55.906381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75038 ] 00:17:51.088 [2024-11-17 08:17:56.059888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.347 [2024-11-17 08:17:56.141811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.916 08:17:56 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.916 08:17:56 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:51.916 08:17:56 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:52.175 [2024-11-17 08:17:57.126364] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:52.175 [2024-11-17 08:17:57.126443] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:52.436 [2024-11-17 08:17:57.285069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.436 [2024-11-17 08:17:57.285127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:52.436 [2024-11-17 08:17:57.285167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:52.436 [2024-11-17 08:17:57.285179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.436 [2024-11-17 08:17:57.288899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.436 [2024-11-17 08:17:57.288953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.436 [2024-11-17 08:17:57.288986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.695 ms 00:17:52.436 [2024-11-17 08:17:57.288996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.436 [2024-11-17 08:17:57.289184] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:52.436 [2024-11-17 08:17:57.290125] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:52.436 [2024-11-17 08:17:57.290198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.436 [2024-11-17 08:17:57.290212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.436 [2024-11-17 08:17:57.290226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:17:52.436 [2024-11-17 08:17:57.290237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.436 [2024-11-17 08:17:57.291529] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:52.436 [2024-11-17 08:17:57.307321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.436 [2024-11-17 08:17:57.307428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:52.436 [2024-11-17 08:17:57.307464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.797 ms 00:17:52.436 [2024-11-17 08:17:57.307483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.436 [2024-11-17 08:17:57.307614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.436 [2024-11-17 08:17:57.307669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:52.436 [2024-11-17 08:17:57.307693] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:52.436 [2024-11-17 08:17:57.307710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.436 [2024-11-17 08:17:57.312352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.436 [2024-11-17 08:17:57.312455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.436 [2024-11-17 08:17:57.312500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.558 ms 00:17:52.436 [2024-11-17 08:17:57.312515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.436 [2024-11-17 08:17:57.312667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.436 [2024-11-17 08:17:57.312709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.436 [2024-11-17 08:17:57.312739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:52.436 [2024-11-17 08:17:57.312755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.436 [2024-11-17 08:17:57.312804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.436 [2024-11-17 08:17:57.312823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:52.436 [2024-11-17 08:17:57.312835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:52.436 [2024-11-17 08:17:57.312850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.436 [2024-11-17 08:17:57.312883] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:52.436 [2024-11-17 08:17:57.316883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.436 [2024-11-17 08:17:57.316916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.436 [2024-11-17 08:17:57.316951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.004 ms 00:17:52.436 [2024-11-17 08:17:57.316963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.436 [2024-11-17 08:17:57.317030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.436 [2024-11-17 08:17:57.317047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:52.437 [2024-11-17 08:17:57.317063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:52.437 [2024-11-17 08:17:57.317106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.437 [2024-11-17 08:17:57.317145] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:52.437 [2024-11-17 08:17:57.317204] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:52.437 [2024-11-17 08:17:57.317297] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:52.437 [2024-11-17 08:17:57.317323] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:52.437 [2024-11-17 08:17:57.317497] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:52.437 [2024-11-17 08:17:57.317519] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:52.437 [2024-11-17 08:17:57.317542] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:52.437 [2024-11-17 08:17:57.317562] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:52.437 [2024-11-17 08:17:57.317580] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:52.437 [2024-11-17 08:17:57.317592] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:52.437 [2024-11-17 08:17:57.317608] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:52.437 [2024-11-17 08:17:57.317618] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:52.437 [2024-11-17 08:17:57.317637] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:52.437 [2024-11-17 08:17:57.317649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.437 [2024-11-17 08:17:57.317665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:52.437 [2024-11-17 08:17:57.317677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:17:52.437 [2024-11-17 08:17:57.317692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.437 [2024-11-17 08:17:57.317822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.437 [2024-11-17 08:17:57.317855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:52.437 [2024-11-17 08:17:57.317869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:52.437 [2024-11-17 08:17:57.317886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.437 [2024-11-17 08:17:57.318005] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:52.437 [2024-11-17 08:17:57.318051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:52.437 [2024-11-17 08:17:57.318065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:52.437 [2024-11-17 08:17:57.318134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.437 [2024-11-17 08:17:57.318159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:52.437 [2024-11-17 08:17:57.318180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:52.437 [2024-11-17 08:17:57.318193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:52.437 [2024-11-17 08:17:57.318217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:52.437 [2024-11-17 08:17:57.318230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:52.437 [2024-11-17 08:17:57.318247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:52.437 [2024-11-17 08:17:57.318259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:52.437 [2024-11-17 08:17:57.318276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:52.437 [2024-11-17 08:17:57.318288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:52.437 [2024-11-17 08:17:57.318310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:52.437 [2024-11-17 08:17:57.318323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:52.437 [2024-11-17 08:17:57.318341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.437 
[2024-11-17 08:17:57.318353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:52.437 [2024-11-17 08:17:57.318370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:52.437 [2024-11-17 08:17:57.318382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.437 [2024-11-17 08:17:57.318399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:52.437 [2024-11-17 08:17:57.318425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:52.437 [2024-11-17 08:17:57.318474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.437 [2024-11-17 08:17:57.318500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:52.437 [2024-11-17 08:17:57.318519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:52.437 [2024-11-17 08:17:57.318530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.437 [2024-11-17 08:17:57.318545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:52.437 [2024-11-17 08:17:57.318556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:52.437 [2024-11-17 08:17:57.318571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.437 [2024-11-17 08:17:57.318582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:52.437 [2024-11-17 08:17:57.318597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:52.437 [2024-11-17 08:17:57.318607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.437 [2024-11-17 08:17:57.318624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:52.437 [2024-11-17 08:17:57.318635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:52.437 [2024-11-17 08:17:57.318650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:52.437 [2024-11-17 08:17:57.318661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:52.437 [2024-11-17 08:17:57.318676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:52.437 [2024-11-17 08:17:57.318687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:52.437 [2024-11-17 08:17:57.318702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:52.437 [2024-11-17 08:17:57.318713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:52.437 [2024-11-17 08:17:57.318732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.437 [2024-11-17 08:17:57.318743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:52.437 [2024-11-17 08:17:57.318758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:52.437 [2024-11-17 08:17:57.318769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.437 [2024-11-17 08:17:57.318783] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:52.437 [2024-11-17 08:17:57.318795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:52.437 [2024-11-17 08:17:57.318817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:52.437 [2024-11-17 08:17:57.318829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.437 [2024-11-17 08:17:57.318845] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:52.437 [2024-11-17 08:17:57.318857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:52.437 [2024-11-17 08:17:57.318871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:52.437 [2024-11-17 08:17:57.318882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:52.437 [2024-11-17 08:17:57.318897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:52.437 [2024-11-17 08:17:57.318908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:52.437 [2024-11-17 08:17:57.318927] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:52.437 [2024-11-17 08:17:57.318942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:52.437 [2024-11-17 08:17:57.318964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:52.437 [2024-11-17 08:17:57.318976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:52.437 [2024-11-17 08:17:57.318992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:52.437 [2024-11-17 08:17:57.319004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:52.437 [2024-11-17 08:17:57.319020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:52.437 [2024-11-17 08:17:57.319031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:52.437 [2024-11-17 08:17:57.319047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:52.437 [2024-11-17 08:17:57.319059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:52.437 [2024-11-17 08:17:57.319075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:52.437 [2024-11-17 08:17:57.319133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:52.437 [2024-11-17 08:17:57.319152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:52.437 [2024-11-17 08:17:57.319167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:52.437 [2024-11-17 08:17:57.319185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:52.437 [2024-11-17 08:17:57.319199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:52.437 [2024-11-17 08:17:57.319217] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:52.437 [2024-11-17 
08:17:57.319232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:52.437 [2024-11-17 08:17:57.319254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:52.437 [2024-11-17 08:17:57.319268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:52.437 [2024-11-17 08:17:57.319286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:52.438 [2024-11-17 08:17:57.319299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:52.438 [2024-11-17 08:17:57.319318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.438 [2024-11-17 08:17:57.319331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:52.438 [2024-11-17 08:17:57.319349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.371 ms 00:17:52.438 [2024-11-17 08:17:57.319373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.438 [2024-11-17 08:17:57.349563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.438 [2024-11-17 08:17:57.349612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.438 [2024-11-17 08:17:57.349651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.091 ms 00:17:52.438 [2024-11-17 08:17:57.349663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.438 [2024-11-17 08:17:57.349832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.438 [2024-11-17 08:17:57.349849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:52.438 [2024-11-17 08:17:57.349897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:52.438 [2024-11-17 08:17:57.349924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.438 [2024-11-17 08:17:57.383598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.438 [2024-11-17 08:17:57.383644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.438 [2024-11-17 08:17:57.383689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.639 ms 00:17:52.438 [2024-11-17 08:17:57.383700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.438 [2024-11-17 08:17:57.383822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.438 [2024-11-17 08:17:57.383839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.438 [2024-11-17 08:17:57.383855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:52.438 [2024-11-17 08:17:57.383881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.438 [2024-11-17 08:17:57.384259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.438 [2024-11-17 08:17:57.384289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.438 [2024-11-17 08:17:57.384316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:17:52.438 [2024-11-17 08:17:57.384328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:52.438 [2024-11-17 08:17:57.384507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.438 [2024-11-17 08:17:57.384524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.438 [2024-11-17 08:17:57.384541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:17:52.438 [2024-11-17 08:17:57.384552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.438 [2024-11-17 08:17:57.400570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.438 [2024-11-17 08:17:57.400622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.438 [2024-11-17 08:17:57.400659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.984 ms 00:17:52.438 [2024-11-17 08:17:57.400671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.438 [2024-11-17 08:17:57.413794] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:52.438 [2024-11-17 08:17:57.413830] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:52.438 [2024-11-17 08:17:57.413868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.438 [2024-11-17 08:17:57.413881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:52.438 [2024-11-17 08:17:57.413897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.065 ms 00:17:52.438 [2024-11-17 08:17:57.413908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.438 [2024-11-17 08:17:57.437342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.438 [2024-11-17 08:17:57.437379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:52.438 [2024-11-17 08:17:57.437417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.344 ms 00:17:52.438 [2024-11-17 08:17:57.437429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.697 [2024-11-17 08:17:57.450848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.697 [2024-11-17 08:17:57.450900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:52.697 [2024-11-17 08:17:57.450925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.321 ms 00:17:52.697 [2024-11-17 08:17:57.450951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.697 [2024-11-17 08:17:57.463631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.697 [2024-11-17 08:17:57.463668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:52.697 [2024-11-17 08:17:57.463718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.592 ms 00:17:52.697 [2024-11-17 08:17:57.463729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.697 [2024-11-17 08:17:57.464578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.697 [2024-11-17 08:17:57.464608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:52.697 [2024-11-17 08:17:57.464642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:17:52.697 [2024-11-17 08:17:57.464653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.697 [2024-11-17 
08:17:57.528598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.697 [2024-11-17 08:17:57.528664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:52.697 [2024-11-17 08:17:57.528705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.907 ms 00:17:52.697 [2024-11-17 08:17:57.528717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.697 [2024-11-17 08:17:57.538707] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:52.697 [2024-11-17 08:17:57.549985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.697 [2024-11-17 08:17:57.550071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:52.697 [2024-11-17 08:17:57.550107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.139 ms 00:17:52.697 [2024-11-17 08:17:57.550125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.698 [2024-11-17 08:17:57.550251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.698 [2024-11-17 08:17:57.550275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:52.698 [2024-11-17 08:17:57.550288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:52.698 [2024-11-17 08:17:57.550303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.698 [2024-11-17 08:17:57.550409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.698 [2024-11-17 08:17:57.550431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:52.698 [2024-11-17 08:17:57.550444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:52.698 [2024-11-17 08:17:57.550461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.698 [2024-11-17 08:17:57.550497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.698 [2024-11-17 08:17:57.550516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:52.698 [2024-11-17 08:17:57.550528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:52.698 [2024-11-17 08:17:57.550547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.698 [2024-11-17 08:17:57.550595] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:52.698 [2024-11-17 08:17:57.550625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.698 [2024-11-17 08:17:57.550637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:52.698 [2024-11-17 08:17:57.550661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:52.698 [2024-11-17 08:17:57.550672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.698 [2024-11-17 08:17:57.575782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.698 [2024-11-17 08:17:57.575820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:52.698 [2024-11-17 08:17:57.575858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.067 ms 00:17:52.698 [2024-11-17 08:17:57.575869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.698 [2024-11-17 08:17:57.575980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.698 [2024-11-17 08:17:57.575999] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:52.698 [2024-11-17 08:17:57.576016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:52.698 [2024-11-17 08:17:57.576032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.698 [2024-11-17 08:17:57.577445] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:52.698 [2024-11-17 08:17:57.580918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.907 ms, result 0 00:17:52.698 [2024-11-17 08:17:57.582301] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:52.698 Some configs were skipped because the RPC state that can call them passed over. 00:17:52.698 08:17:57 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:52.957 [2024-11-17 08:17:57.821479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.957 [2024-11-17 08:17:57.821556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:52.957 [2024-11-17 08:17:57.821574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.478 ms 00:17:52.957 [2024-11-17 08:17:57.821591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.957 [2024-11-17 08:17:57.821641] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.635 ms, result 0 00:17:52.957 true 00:17:52.957 08:17:57 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:53.216 [2024-11-17 08:17:58.053694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.216 [2024-11-17 08:17:58.053763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:53.216 [2024-11-17 08:17:58.053819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:17:53.216 [2024-11-17 08:17:58.053833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.216 [2024-11-17 08:17:58.053921] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.510 ms, result 0 00:17:53.216 true 00:17:53.216 08:17:58 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 75038 00:17:53.216 08:17:58 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 75038 ']' 00:17:53.216 08:17:58 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 75038 00:17:53.216 08:17:58 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:17:53.216 08:17:58 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.216 08:17:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75038 00:17:53.216 killing process with pid 75038 00:17:53.216 08:17:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.216 08:17:58 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.216 08:17:58 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75038' 00:17:53.216 08:17:58 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 75038 00:17:53.216 08:17:58 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 75038 00:17:54.157 [2024-11-17 08:17:58.827531] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.157 [2024-11-17 08:17:58.827633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:54.157 [2024-11-17 08:17:58.827653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:54.157 [2024-11-17 08:17:58.827665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.157 [2024-11-17 08:17:58.827708] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:54.157 [2024-11-17 08:17:58.830482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.157 [2024-11-17 08:17:58.830511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:54.157 [2024-11-17 08:17:58.830542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:17:54.157 [2024-11-17 08:17:58.830568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.157 [2024-11-17 08:17:58.830875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.157 [2024-11-17 08:17:58.830916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:54.157 [2024-11-17 08:17:58.830931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:54.157 [2024-11-17 08:17:58.830942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.157 [2024-11-17 08:17:58.834461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.157 [2024-11-17 08:17:58.834517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:54.157 [2024-11-17 08:17:58.834537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.491 ms 00:17:54.157 [2024-11-17 08:17:58.834549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.157 [2024-11-17 08:17:58.840483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.157 [2024-11-17 08:17:58.840515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:54.157 [2024-11-17 08:17:58.840545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.857 ms 00:17:54.157 [2024-11-17 08:17:58.840556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.157 [2024-11-17 08:17:58.850702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.157 [2024-11-17 08:17:58.850736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:54.157 [2024-11-17 08:17:58.850769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.089 ms 00:17:54.157 [2024-11-17 08:17:58.850788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.157 [2024-11-17 08:17:58.858458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.157 [2024-11-17 08:17:58.858493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:54.157 [2024-11-17 08:17:58.858526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.625 ms 00:17:54.157 [2024-11-17 08:17:58.858536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.157 [2024-11-17 08:17:58.858682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.157 [2024-11-17 08:17:58.858700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:54.157 [2024-11-17 08:17:58.858713] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms
00:17:54.157 [2024-11-17 08:17:58.858722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.157 [2024-11-17 08:17:58.869486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.157 [2024-11-17 08:17:58.869519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:17:54.157 [2024-11-17 08:17:58.869549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.708 ms
00:17:54.157 [2024-11-17 08:17:58.869559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.157 [2024-11-17 08:17:58.879930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.157 [2024-11-17 08:17:58.879963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:17:54.157 [2024-11-17 08:17:58.880003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.306 ms
00:17:54.157 [2024-11-17 08:17:58.880014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.157 [2024-11-17 08:17:58.889957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.157 [2024-11-17 08:17:58.889990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:17:54.157 [2024-11-17 08:17:58.890027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.879 ms
00:17:54.157 [2024-11-17 08:17:58.890038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.157 [2024-11-17 08:17:58.900213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.157 [2024-11-17 08:17:58.900260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:17:54.157 [2024-11-17 08:17:58.900294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.069 ms
00:17:54.157 [2024-11-17 08:17:58.900306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.157 [2024-11-17 08:17:58.900386] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:54.157 [2024-11-17 08:17:58.900408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (100 identical per-band entries condensed)
00:17:54.158 [2024-11-17 08:17:58.902000] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:54.158 [2024-11-17 08:17:58.902028] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a6c485b1-efda-4eee-863c-5f40f9b95397
00:17:54.158 [2024-11-17 08:17:58.902053] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:54.158 [2024-11-17 08:17:58.902103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:54.158 [2024-11-17 08:17:58.902117] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:54.158 [2024-11-17 08:17:58.902149] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:54.158 [2024-11-17 08:17:58.902161] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:54.158 [2024-11-17 08:17:58.902178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:54.158 [2024-11-17 08:17:58.902190] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:54.158 [2024-11-17 08:17:58.902206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:54.158 [2024-11-17 08:17:58.902217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:54.158 [2024-11-17 08:17:58.902234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.158 [2024-11-17 08:17:58.902247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:54.158 [2024-11-17 08:17:58.902264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.853 ms 00:17:54.158 [2024-11-17 08:17:58.902276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.158 [2024-11-17 08:17:58.916903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.158 [2024-11-17 08:17:58.916954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:54.158 [2024-11-17 08:17:58.917009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.568 ms 00:17:54.158 [2024-11-17 08:17:58.917021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.158 [2024-11-17 08:17:58.917539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.158 [2024-11-17 08:17:58.917585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:54.158 [2024-11-17 08:17:58.917605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:17:54.158 [2024-11-17 08:17:58.917622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.158 [2024-11-17 08:17:58.964317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.158 [2024-11-17 08:17:58.964358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.158 [2024-11-17 08:17:58.964393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.158 [2024-11-17 08:17:58.964405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.158 [2024-11-17 08:17:58.964533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.158 [2024-11-17 08:17:58.964551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.158 [2024-11-17 08:17:58.964567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.158 [2024-11-17 08:17:58.964583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.158 [2024-11-17 08:17:58.964694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.158 [2024-11-17 08:17:58.964712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.158 [2024-11-17 08:17:58.964733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.158 [2024-11-17 08:17:58.964745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.158 [2024-11-17 08:17:58.964775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.158 [2024-11-17 08:17:58.964789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.158 [2024-11-17 08:17:58.964805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.158 [2024-11-17 08:17:58.964817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.158 [2024-11-17 08:17:59.044377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.159 [2024-11-17 08:17:59.044435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.159 [2024-11-17 08:17:59.044469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.159 [2024-11-17 08:17:59.044479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.159 [2024-11-17 
08:17:59.116486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.159 [2024-11-17 08:17:59.116555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.159 [2024-11-17 08:17:59.116594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.159 [2024-11-17 08:17:59.116612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.159 [2024-11-17 08:17:59.116747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.159 [2024-11-17 08:17:59.116765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.159 [2024-11-17 08:17:59.116787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.159 [2024-11-17 08:17:59.116830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.159 [2024-11-17 08:17:59.116887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.159 [2024-11-17 08:17:59.116902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.159 [2024-11-17 08:17:59.116918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.159 [2024-11-17 08:17:59.116931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.159 [2024-11-17 08:17:59.117064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.159 [2024-11-17 08:17:59.117124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.159 [2024-11-17 08:17:59.117147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.159 [2024-11-17 08:17:59.117160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.159 [2024-11-17 08:17:59.117238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.159 [2024-11-17 08:17:59.117257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:54.159 [2024-11-17 08:17:59.117275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.159 [2024-11-17 08:17:59.117287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.159 [2024-11-17 08:17:59.117340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.159 [2024-11-17 08:17:59.117360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.159 [2024-11-17 08:17:59.117381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.159 [2024-11-17 08:17:59.117393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.159 [2024-11-17 08:17:59.117454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.159 [2024-11-17 08:17:59.117471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.159 [2024-11-17 08:17:59.117503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.159 [2024-11-17 08:17:59.117515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.159 [2024-11-17 08:17:59.117687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 290.121 ms, result 0 00:17:55.094 08:17:59 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:55.094 08:17:59 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:55.094 [2024-11-17 08:17:59.911125] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:55.094 [2024-11-17 08:17:59.911288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75091 ] 00:17:55.094 [2024-11-17 08:18:00.073346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.353 [2024-11-17 08:18:00.156978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.612 [2024-11-17 08:18:00.437185] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:55.612 [2024-11-17 08:18:00.437297] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:55.612 [2024-11-17 08:18:00.594870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.612 [2024-11-17 08:18:00.594932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:55.612 [2024-11-17 08:18:00.594964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:55.612 [2024-11-17 08:18:00.594975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.612 [2024-11-17 08:18:00.597866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.612 [2024-11-17 08:18:00.597919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.612 [2024-11-17 08:18:00.597949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.865 ms 00:17:55.612 [2024-11-17 08:18:00.597958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.612 [2024-11-17 08:18:00.598103] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:55.613 [2024-11-17 08:18:00.599003] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:55.613 [2024-11-17 08:18:00.599055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.613 [2024-11-17 08:18:00.599098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.613 [2024-11-17 08:18:00.599134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:17:55.613 [2024-11-17 08:18:00.599152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.613 [2024-11-17 08:18:00.600540] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:55.613 [2024-11-17 08:18:00.614259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.613 [2024-11-17 08:18:00.614323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:55.613 [2024-11-17 08:18:00.614353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.721 ms 00:17:55.613 [2024-11-17 08:18:00.614363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.613 [2024-11-17 08:18:00.614476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.613 [2024-11-17 08:18:00.614496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:55.613 [2024-11-17 08:18:00.614507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.021 ms 00:17:55.613 [2024-11-17 08:18:00.614517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.613 [2024-11-17 08:18:00.618897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.613 [2024-11-17 08:18:00.618952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.613 [2024-11-17 08:18:00.618982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.284 ms 00:17:55.613 [2024-11-17 08:18:00.618992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.613 [2024-11-17 08:18:00.619128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.613 [2024-11-17 08:18:00.619148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.613 [2024-11-17 08:18:00.619160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:55.613 [2024-11-17 08:18:00.619171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.613 [2024-11-17 08:18:00.619205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.613 [2024-11-17 08:18:00.619240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:55.613 [2024-11-17 08:18:00.619267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:55.613 [2024-11-17 08:18:00.619292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.613 [2024-11-17 08:18:00.619353] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:55.873 [2024-11-17 08:18:00.623447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.873 [2024-11-17 08:18:00.623515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.873 [2024-11-17 08:18:00.623531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.103 ms 00:17:55.873 [2024-11-17 08:18:00.623542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.873 [2024-11-17 08:18:00.623609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.873 [2024-11-17 08:18:00.623628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:55.873 [2024-11-17 08:18:00.623656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:55.873 [2024-11-17 08:18:00.623666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.873 [2024-11-17 08:18:00.623734] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:55.873 [2024-11-17 08:18:00.623764] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:55.873 [2024-11-17 08:18:00.623864] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:55.873 [2024-11-17 08:18:00.623883] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:55.873 [2024-11-17 08:18:00.624023] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:55.873 [2024-11-17 08:18:00.624038] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:55.873 [2024-11-17 08:18:00.624053] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:55.873 [2024-11-17 08:18:00.624068] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:55.873 [2024-11-17 08:18:00.624087] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:55.873 [2024-11-17 08:18:00.624099] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:55.873 [2024-11-17 08:18:00.624110] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:55.873 [2024-11-17 08:18:00.624121] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:55.873 [2024-11-17 08:18:00.624132] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:55.873 [2024-11-17 08:18:00.624144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.873 [2024-11-17 08:18:00.624156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:55.873 [2024-11-17 08:18:00.624168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:17:55.873 [2024-11-17 08:18:00.624179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.873 [2024-11-17 08:18:00.624294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.874 [2024-11-17 08:18:00.624341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:55.874 [2024-11-17 08:18:00.624358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:55.874 [2024-11-17 08:18:00.624368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.874 [2024-11-17 08:18:00.624495] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:55.874 [2024-11-17 08:18:00.624523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:55.874 [2024-11-17 08:18:00.624537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.874 [2024-11-17 08:18:00.624549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.874 [2024-11-17 08:18:00.624560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:55.874 [2024-11-17 08:18:00.624569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:55.874 [2024-11-17 08:18:00.624579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:55.874 [2024-11-17 08:18:00.624589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:55.874 [2024-11-17 08:18:00.624599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:55.874 [2024-11-17 08:18:00.624609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.874 [2024-11-17 08:18:00.624619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:55.874 [2024-11-17 08:18:00.624630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:55.874 [2024-11-17 08:18:00.624640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.874 [2024-11-17 08:18:00.624663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:55.874 [2024-11-17 08:18:00.624674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:55.874 [2024-11-17 08:18:00.624684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.874 [2024-11-17 08:18:00.624696] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:55.874 [2024-11-17 08:18:00.624706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:55.874 [2024-11-17 08:18:00.624717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.874 [2024-11-17 08:18:00.624727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:55.874 [2024-11-17 08:18:00.624737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:55.874 [2024-11-17 08:18:00.624747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.874 [2024-11-17 08:18:00.624757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:55.874 [2024-11-17 08:18:00.624781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:55.874 [2024-11-17 08:18:00.624790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.874 [2024-11-17 08:18:00.624800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:55.874 [2024-11-17 08:18:00.624809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:55.874 [2024-11-17 08:18:00.624819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.874 [2024-11-17 08:18:00.624828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:55.874 [2024-11-17 08:18:00.624838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:55.874 [2024-11-17 08:18:00.624847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.874 [2024-11-17 08:18:00.624856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:55.874 [2024-11-17 08:18:00.624865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:55.874 [2024-11-17 08:18:00.624874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.874 [2024-11-17 08:18:00.624884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:55.874 [2024-11-17 08:18:00.624893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:55.874 [2024-11-17 08:18:00.624902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.874 [2024-11-17 08:18:00.624912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:55.874 [2024-11-17 08:18:00.624921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:55.874 [2024-11-17 08:18:00.624930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.874 [2024-11-17 08:18:00.624940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:55.874 [2024-11-17 08:18:00.624949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:55.874 [2024-11-17 08:18:00.624959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.874 [2024-11-17 08:18:00.624968] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:55.874 [2024-11-17 08:18:00.624978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:55.874 [2024-11-17 08:18:00.624989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.874 [2024-11-17 08:18:00.625003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.874 [2024-11-17 08:18:00.625013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:55.874 
[2024-11-17 08:18:00.625024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:55.874 [2024-11-17 08:18:00.625033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:55.874 [2024-11-17 08:18:00.625043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:55.874 [2024-11-17 08:18:00.625052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:55.874 [2024-11-17 08:18:00.625062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:55.874 [2024-11-17 08:18:00.625073] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:55.874 [2024-11-17 08:18:00.625103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.874 [2024-11-17 08:18:00.625138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:55.874 [2024-11-17 08:18:00.625155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:55.874 [2024-11-17 08:18:00.625166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:55.874 [2024-11-17 08:18:00.625177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:55.874 [2024-11-17 08:18:00.625187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:55.874 [2024-11-17 08:18:00.625214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:55.874 [2024-11-17 08:18:00.625225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:55.874 [2024-11-17 08:18:00.625235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:55.874 [2024-11-17 08:18:00.625246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:55.874 [2024-11-17 08:18:00.625273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:55.874 [2024-11-17 08:18:00.625284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:55.874 [2024-11-17 08:18:00.625296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:55.874 [2024-11-17 08:18:00.625307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:55.874 [2024-11-17 08:18:00.625318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:55.874 [2024-11-17 08:18:00.625329] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:55.874 [2024-11-17 08:18:00.625342] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.874 [2024-11-17 08:18:00.625355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:55.874 [2024-11-17 08:18:00.625367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:55.874 [2024-11-17 08:18:00.625379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:55.874 [2024-11-17 08:18:00.625390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:55.874 [2024-11-17 08:18:00.625403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.874 [2024-11-17 08:18:00.625430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:55.874 [2024-11-17 08:18:00.625448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:17:55.874 [2024-11-17 08:18:00.625459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.874 [2024-11-17 08:18:00.654172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.874 [2024-11-17 08:18:00.654239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:55.874 [2024-11-17 08:18:00.654272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.641 ms 00:17:55.874 [2024-11-17 08:18:00.654282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.874 [2024-11-17 08:18:00.654438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.874 [2024-11-17 08:18:00.654462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:55.874 [2024-11-17 08:18:00.654475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:55.874 [2024-11-17 08:18:00.654485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.874 [2024-11-17 08:18:00.695479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.874 [2024-11-17 08:18:00.695527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:55.874 [2024-11-17 08:18:00.695559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.935 ms 00:17:55.874 [2024-11-17 08:18:00.695582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.874 [2024-11-17 08:18:00.695746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.874 [2024-11-17 08:18:00.695779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:55.874 [2024-11-17 08:18:00.695807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:55.874 [2024-11-17 08:18:00.695817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.696152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.696195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:55.875 [2024-11-17 08:18:00.696209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:55.875 [2024-11-17 08:18:00.696225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 
08:18:00.696371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.696397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:55.875 [2024-11-17 08:18:00.696410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:17:55.875 [2024-11-17 08:18:00.696420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.710726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.710777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:55.875 [2024-11-17 08:18:00.710807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.278 ms 00:17:55.875 [2024-11-17 08:18:00.710818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.724192] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:55.875 [2024-11-17 08:18:00.724244] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:55.875 [2024-11-17 08:18:00.724276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.724287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:55.875 [2024-11-17 08:18:00.724299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.323 ms 00:17:55.875 [2024-11-17 08:18:00.724309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.748168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.748230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:55.875 [2024-11-17 08:18:00.748260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.772 ms 00:17:55.875 [2024-11-17 08:18:00.748270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.761188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.761238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:55.875 [2024-11-17 08:18:00.761267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.831 ms 00:17:55.875 [2024-11-17 08:18:00.761276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.773943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.773992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:55.875 [2024-11-17 08:18:00.774020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.584 ms 00:17:55.875 [2024-11-17 08:18:00.774029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.774851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.774913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:55.875 [2024-11-17 08:18:00.774941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:17:55.875 [2024-11-17 08:18:00.774951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.833826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.833907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:55.875 [2024-11-17 08:18:00.833926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.844 ms 00:17:55.875 [2024-11-17 08:18:00.833937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.844188] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:55.875 [2024-11-17 08:18:00.855937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.856006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:55.875 [2024-11-17 08:18:00.856038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.806 ms 00:17:55.875 [2024-11-17 08:18:00.856048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.856187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.856207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:55.875 [2024-11-17 08:18:00.856219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:55.875 [2024-11-17 08:18:00.856229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.856288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.856318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:55.875 [2024-11-17 08:18:00.856345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:55.875 [2024-11-17 08:18:00.856370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.856408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.856426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:55.875 [2024-11-17 08:18:00.856438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:55.875 [2024-11-17 08:18:00.856448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.856485] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:55.875 [2024-11-17 08:18:00.856501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.856511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:55.875 [2024-11-17 08:18:00.856522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:55.875 [2024-11-17 08:18:00.856532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.875 [2024-11-17 08:18:00.882999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.875 [2024-11-17 08:18:00.883056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:55.875 [2024-11-17 08:18:00.883087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.438 ms 00:17:55.875 [2024-11-17 08:18:00.883111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.135 [2024-11-17 08:18:00.883271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.135 [2024-11-17 08:18:00.883291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:56.135 [2024-11-17 08:18:00.883334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:56.135 [2024-11-17 08:18:00.883389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.135 [2024-11-17 08:18:00.884433] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:56.135 [2024-11-17 08:18:00.888437] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 289.190 ms, result 0 00:17:56.135 [2024-11-17 08:18:00.889351] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:56.135 [2024-11-17 08:18:00.904365] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:57.072  [2024-11-17T08:18:03.064Z] Copying: 24/256 [MB] (24 MBps) [2024-11-17T08:18:04.001Z] Copying: 45/256 [MB] (21 MBps) [2024-11-17T08:18:04.939Z] Copying: 67/256 [MB] (21 MBps) [2024-11-17T08:18:06.319Z] Copying: 88/256 [MB] (21 MBps) [2024-11-17T08:18:07.257Z] Copying: 108/256 [MB] (20 MBps) [2024-11-17T08:18:08.194Z] Copying: 128/256 [MB] (20 MBps) [2024-11-17T08:18:09.130Z] Copying: 149/256 [MB] (20 MBps) [2024-11-17T08:18:10.068Z] Copying: 169/256 [MB] (19 MBps) [2024-11-17T08:18:11.005Z] Copying: 189/256 [MB] (20 MBps) [2024-11-17T08:18:11.943Z] Copying: 209/256 [MB] (20 MBps) [2024-11-17T08:18:13.324Z] Copying: 230/256 [MB] (20 MBps) [2024-11-17T08:18:13.324Z] Copying: 251/256 [MB] (20 MBps) [2024-11-17T08:18:13.324Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-17 08:18:13.119565] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:08.312 [2024-11-17 08:18:13.129387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.312 [2024-11-17 08:18:13.129434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:08.312 [2024-11-17 08:18:13.129462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:08.312 [2024-11-17 08:18:13.129501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.312 [2024-11-17 08:18:13.129595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:08.312 [2024-11-17 08:18:13.132360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.312 [2024-11-17 08:18:13.132398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:08.312 [2024-11-17 08:18:13.132422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.733 ms 00:18:08.312 [2024-11-17 08:18:13.132438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.312 [2024-11-17 08:18:13.132818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.312 [2024-11-17 08:18:13.132855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:08.312 [2024-11-17 08:18:13.132878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:18:08.312 [2024-11-17 08:18:13.132897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.312 [2024-11-17 08:18:13.136043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.312 [2024-11-17 08:18:13.136100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:08.312 [2024-11-17 08:18:13.136131] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.114 ms 00:18:08.312 [2024-11-17 08:18:13.136149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.312 [2024-11-17 08:18:13.141984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.312 [2024-11-17 08:18:13.142022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:08.312 [2024-11-17 08:18:13.142051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.753 ms 00:18:08.312 [2024-11-17 08:18:13.142075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.312 [2024-11-17 08:18:13.166285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.312 [2024-11-17 08:18:13.166326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:08.312 [2024-11-17 08:18:13.166367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.996 ms 00:18:08.312 [2024-11-17 08:18:13.166385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.312 [2024-11-17 08:18:13.181182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.312 [2024-11-17 08:18:13.181236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:08.312 [2024-11-17 08:18:13.181260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.659 ms 00:18:08.312 [2024-11-17 08:18:13.181293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.312 [2024-11-17 08:18:13.181511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.312 [2024-11-17 08:18:13.181555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:08.312 [2024-11-17 08:18:13.181589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:08.312 [2024-11-17 08:18:13.181609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.312 [2024-11-17 08:18:13.206418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.312 [2024-11-17 08:18:13.206458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:08.312 [2024-11-17 08:18:13.206481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.744 ms 00:18:08.312 [2024-11-17 08:18:13.206497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.312 [2024-11-17 08:18:13.230951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.312 [2024-11-17 08:18:13.230993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:08.312 [2024-11-17 08:18:13.231016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.335 ms 00:18:08.312 [2024-11-17 08:18:13.231033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.312 [2024-11-17 08:18:13.255453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.312 [2024-11-17 08:18:13.255511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:08.312 [2024-11-17 08:18:13.255535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.292 ms 00:18:08.312 [2024-11-17 08:18:13.255552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.312 [2024-11-17 08:18:13.279380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.312 [2024-11-17 08:18:13.279421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:18:08.312 [2024-11-17 08:18:13.279460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.650 ms 00:18:08.312 [2024-11-17 08:18:13.279478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.312 [2024-11-17 08:18:13.279598] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:08.312 [2024-11-17 08:18:13.279631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:08.312 [2024-11-17 08:18:13.279663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:08.312 [2024-11-17 08:18:13.279680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:08.312 [2024-11-17 08:18:13.279700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:08.312 [2024-11-17 08:18:13.279720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:08.312 [2024-11-17 08:18:13.279740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:08.312 [2024-11-17 08:18:13.279759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:08.312 [2024-11-17 08:18:13.279778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:08.312 [2024-11-17 08:18:13.279797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:08.312 [2024-11-17 08:18:13.279815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:08.312 [2024-11-17 08:18:13.279833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:08.312 [2024-11-17 08:18:13.279853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:08.312 [2024-11-17 08:18:13.279872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.279891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.279919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.279941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.279956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.279969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.279982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.279995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 
08:18:13.280033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:18:08.313 [2024-11-17 08:18:13.280522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.280977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:08.313 [2024-11-17 08:18:13.281632] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:08.313 [2024-11-17 08:18:13.281654] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a6c485b1-efda-4eee-863c-5f40f9b95397 00:18:08.313 [2024-11-17 08:18:13.281671] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:08.314 [2024-11-17 08:18:13.281688] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:08.314 [2024-11-17 08:18:13.281705] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:08.314 [2024-11-17 08:18:13.281723] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:08.314 [2024-11-17 08:18:13.281740] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:08.314 [2024-11-17 08:18:13.281758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:08.314 [2024-11-17 08:18:13.281777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:08.314 [2024-11-17 08:18:13.281794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:08.314 [2024-11-17 08:18:13.281811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:08.314 [2024-11-17 08:18:13.281836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.314 [2024-11-17 08:18:13.281869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:08.314 [2024-11-17 08:18:13.281884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.240 ms 00:18:08.314 [2024-11-17 08:18:13.281896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.314 [2024-11-17 08:18:13.295429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.314 [2024-11-17 08:18:13.295467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:08.314 [2024-11-17 08:18:13.295506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.483 ms 00:18:08.314 [2024-11-17 08:18:13.295532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.314 [2024-11-17 08:18:13.296031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.314 [2024-11-17 08:18:13.296075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:08.314 [2024-11-17 08:18:13.296143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:18:08.314 [2024-11-17 08:18:13.296170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 [2024-11-17 08:18:13.333442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.574 [2024-11-17 08:18:13.333486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:08.574 [2024-11-17 08:18:13.333515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.574 [2024-11-17 08:18:13.333531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 
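A note on the numbers in the dumps above: the layout table printed at startup reports region l2p as blk_sz:0x5a00 and sizes it at 90.00 MiB, which pins the FTL block size at 4 KiB, and the same 4 KiB unit makes the reported 23592960 L2P entries at 4 bytes apiece fill exactly that region. The WAF line prints "inf" because the dd step above only read from ftl0, so user writes stayed at zero while (presumably metadata) traffic accounted for the 960 media writes. A minimal sketch re-deriving these figures, assuming nothing beyond the values visible in the log (Python; the 4 KiB block size is inferred from the layout dump, not read from SPDK headers):

# Re-derive the FTL figures printed in the dumps above; every input
# value is copied from the log, the 4 KiB block size is inferred.
FTL_BLOCK_SIZE = 4096

l2p_blocks = 0x5a00                                 # "Region type:0x2 ... blk_sz:0x5a00"
assert l2p_blocks * FTL_BLOCK_SIZE == 90 * 2**20    # matches "l2p ... blocks: 90.00 MiB"

l2p_entries, addr_size = 23592960, 4                # "L2P entries" / "L2P address size"
assert l2p_entries * addr_size == 90 * 2**20        # the table fills the region exactly

band_blocks = 261120                                # "Band N: 0 / 261120"
print(band_blocks * FTL_BLOCK_SIZE / 2**20)         # -> 1020.0 MiB of data per band

total_writes, user_writes = 960, 0                  # ftl_dev_dump_stats counters
waf = total_writes / user_writes if user_writes else float("inf")
print(waf)                                          # -> inf, matching "WAF: inf"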
[2024-11-17 08:18:13.333693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.574 [2024-11-17 08:18:13.333719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:08.574 [2024-11-17 08:18:13.333739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.574 [2024-11-17 08:18:13.333760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 [2024-11-17 08:18:13.333856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.574 [2024-11-17 08:18:13.333887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:08.574 [2024-11-17 08:18:13.333916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.574 [2024-11-17 08:18:13.333932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 [2024-11-17 08:18:13.333964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.574 [2024-11-17 08:18:13.333988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:08.574 [2024-11-17 08:18:13.334007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.574 [2024-11-17 08:18:13.334019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 [2024-11-17 08:18:13.412168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.574 [2024-11-17 08:18:13.412229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:08.574 [2024-11-17 08:18:13.412253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.574 [2024-11-17 08:18:13.412269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 [2024-11-17 08:18:13.477327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.574 [2024-11-17 08:18:13.477383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:08.574 [2024-11-17 08:18:13.477424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.574 [2024-11-17 08:18:13.477442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 [2024-11-17 08:18:13.477599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.574 [2024-11-17 08:18:13.477626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:08.574 [2024-11-17 08:18:13.477646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.574 [2024-11-17 08:18:13.477666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 [2024-11-17 08:18:13.477716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.574 [2024-11-17 08:18:13.477748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:08.574 [2024-11-17 08:18:13.477779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.574 [2024-11-17 08:18:13.477799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 [2024-11-17 08:18:13.477954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.574 [2024-11-17 08:18:13.477994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:08.574 [2024-11-17 08:18:13.478015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.574 [2024-11-17 08:18:13.478033] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 [2024-11-17 08:18:13.478150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.574 [2024-11-17 08:18:13.478185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:08.574 [2024-11-17 08:18:13.478209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.574 [2024-11-17 08:18:13.478241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 [2024-11-17 08:18:13.478312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.574 [2024-11-17 08:18:13.478341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:08.574 [2024-11-17 08:18:13.478363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.574 [2024-11-17 08:18:13.478383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 [2024-11-17 08:18:13.478503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.574 [2024-11-17 08:18:13.478536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:08.574 [2024-11-17 08:18:13.478567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.574 [2024-11-17 08:18:13.478583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.574 [2024-11-17 08:18:13.478796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 349.385 ms, result 0 00:18:09.513 00:18:09.513 00:18:09.513 08:18:14 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:09.513 08:18:14 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:09.772 08:18:14 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:09.772 [2024-11-17 08:18:14.710743] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
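The three xtrace records above (trim.sh lines 86, 87 and 90) are the heart of the trim check: the test compares the first 4 MiB of the dumped data file against /dev/zero, takes an md5 checksum of the file, then drives a random pattern through the ftl0 bdev with spdk_dd. A minimal sketch of the same sequence, with the literal paths from this run factored into variables (repo_root and testdir are introduced here only for readability):

    # Sketch of the trim verification step seen above.
    repo_root=/home/vagrant/spdk_repo/spdk        # path used in this run
    testdir=$repo_root/test/ftl

    # Trimmed ranges must read back as zeroes: compare the first 4 MiB
    # (4194304 bytes) of the dumped file against /dev/zero.
    cmp --bytes=4194304 "$testdir/data" /dev/zero

    # Checksum the dump so a later read-back can be compared against it.
    md5sum "$testdir/data"

    # Write 1024 blocks of random data through the FTL bdev; ftl.json
    # carries the bdev configuration saved earlier in the test.
    "$repo_root/build/bin/spdk_dd" --if="$testdir/random_pattern" --ob=ftl0 \
        --count=1024 --json="$testdir/config/ftl.json"

The startup log that follows is spdk_dd bringing the FTL device back up before the write.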
00:18:09.772 [2024-11-17 08:18:14.710906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75240 ] 00:18:10.031 [2024-11-17 08:18:14.877738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.031 [2024-11-17 08:18:14.959896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.291 [2024-11-17 08:18:15.212173] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.291 [2024-11-17 08:18:15.212276] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.552 [2024-11-17 08:18:15.369260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.552 [2024-11-17 08:18:15.369305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:10.552 [2024-11-17 08:18:15.369337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:10.552 [2024-11-17 08:18:15.369347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.552 [2024-11-17 08:18:15.372102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.552 [2024-11-17 08:18:15.372136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.552 [2024-11-17 08:18:15.372166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:18:10.552 [2024-11-17 08:18:15.372175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.552 [2024-11-17 08:18:15.372278] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:10.552 [2024-11-17 08:18:15.373183] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:10.552 [2024-11-17 08:18:15.373234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.552 [2024-11-17 08:18:15.373263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.552 [2024-11-17 08:18:15.373274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:18:10.552 [2024-11-17 08:18:15.373284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.552 [2024-11-17 08:18:15.374572] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:10.552 [2024-11-17 08:18:15.387429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.552 [2024-11-17 08:18:15.387469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:10.552 [2024-11-17 08:18:15.387499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.858 ms 00:18:10.552 [2024-11-17 08:18:15.387510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.552 [2024-11-17 08:18:15.387618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.552 [2024-11-17 08:18:15.387641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:10.552 [2024-11-17 08:18:15.387653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:10.552 [2024-11-17 08:18:15.387662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.552 [2024-11-17 08:18:15.391840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:10.552 [2024-11-17 08:18:15.391871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.552 [2024-11-17 08:18:15.391900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.075 ms 00:18:10.552 [2024-11-17 08:18:15.391909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.552 [2024-11-17 08:18:15.392018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.552 [2024-11-17 08:18:15.392036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.552 [2024-11-17 08:18:15.392048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:10.552 [2024-11-17 08:18:15.392057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.552 [2024-11-17 08:18:15.392106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.552 [2024-11-17 08:18:15.392153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:10.552 [2024-11-17 08:18:15.392178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:10.552 [2024-11-17 08:18:15.392188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.552 [2024-11-17 08:18:15.392222] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:10.552 [2024-11-17 08:18:15.395785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.552 [2024-11-17 08:18:15.395816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.552 [2024-11-17 08:18:15.395844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.572 ms 00:18:10.552 [2024-11-17 08:18:15.395854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.552 [2024-11-17 08:18:15.395908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.552 [2024-11-17 08:18:15.395927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:10.552 [2024-11-17 08:18:15.395939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:10.552 [2024-11-17 08:18:15.395948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.552 [2024-11-17 08:18:15.396011] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:10.552 [2024-11-17 08:18:15.396050] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:10.552 [2024-11-17 08:18:15.396153] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:10.552 [2024-11-17 08:18:15.396176] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:10.552 [2024-11-17 08:18:15.396286] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:10.552 [2024-11-17 08:18:15.396307] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:10.552 [2024-11-17 08:18:15.396320] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:10.552 [2024-11-17 08:18:15.396339] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:10.552 [2024-11-17 08:18:15.396352] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:10.552 [2024-11-17 08:18:15.396363] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:10.552 [2024-11-17 08:18:15.396373] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:10.552 [2024-11-17 08:18:15.396382] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:10.552 [2024-11-17 08:18:15.396391] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:10.552 [2024-11-17 08:18:15.396403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.552 [2024-11-17 08:18:15.396413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:10.552 [2024-11-17 08:18:15.396424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:18:10.552 [2024-11-17 08:18:15.396433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.552 [2024-11-17 08:18:15.396526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.552 [2024-11-17 08:18:15.396545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:10.552 [2024-11-17 08:18:15.396555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:10.552 [2024-11-17 08:18:15.396565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.552 [2024-11-17 08:18:15.396665] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:10.552 [2024-11-17 08:18:15.396691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:10.552 [2024-11-17 08:18:15.396703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.552 [2024-11-17 08:18:15.396714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.552 [2024-11-17 08:18:15.396724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:10.552 [2024-11-17 08:18:15.396733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:10.552 [2024-11-17 08:18:15.396742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:10.552 [2024-11-17 08:18:15.396752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:10.552 [2024-11-17 08:18:15.396761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:10.552 [2024-11-17 08:18:15.396770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.552 [2024-11-17 08:18:15.396779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:10.552 [2024-11-17 08:18:15.396788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:10.552 [2024-11-17 08:18:15.396797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.552 [2024-11-17 08:18:15.396819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:10.552 [2024-11-17 08:18:15.396830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:10.552 [2024-11-17 08:18:15.396840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.552 [2024-11-17 08:18:15.396849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:10.552 [2024-11-17 08:18:15.396858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:10.552 [2024-11-17 08:18:15.396867] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.552 [2024-11-17 08:18:15.396877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:10.552 [2024-11-17 08:18:15.396886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:10.552 [2024-11-17 08:18:15.396895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.552 [2024-11-17 08:18:15.396904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:10.552 [2024-11-17 08:18:15.396913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:10.552 [2024-11-17 08:18:15.396922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.552 [2024-11-17 08:18:15.396931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:10.552 [2024-11-17 08:18:15.396941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:10.552 [2024-11-17 08:18:15.396950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.552 [2024-11-17 08:18:15.396959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:10.552 [2024-11-17 08:18:15.396968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:10.552 [2024-11-17 08:18:15.396977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.552 [2024-11-17 08:18:15.396986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:10.552 [2024-11-17 08:18:15.396995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:10.552 [2024-11-17 08:18:15.397004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.552 [2024-11-17 08:18:15.397013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:10.552 [2024-11-17 08:18:15.397022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:10.552 [2024-11-17 08:18:15.397031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.552 [2024-11-17 08:18:15.397040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:10.552 [2024-11-17 08:18:15.397049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:10.553 [2024-11-17 08:18:15.397058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.553 [2024-11-17 08:18:15.397067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:10.553 [2024-11-17 08:18:15.397077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:10.553 [2024-11-17 08:18:15.397124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.553 [2024-11-17 08:18:15.397134] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:10.553 [2024-11-17 08:18:15.397145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:10.553 [2024-11-17 08:18:15.397160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.553 [2024-11-17 08:18:15.397171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.553 [2024-11-17 08:18:15.397182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:10.553 [2024-11-17 08:18:15.397192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:10.553 [2024-11-17 08:18:15.397202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:10.553 
[2024-11-17 08:18:15.397212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:10.553 [2024-11-17 08:18:15.397221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:10.553 [2024-11-17 08:18:15.397231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:10.553 [2024-11-17 08:18:15.397242] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:10.553 [2024-11-17 08:18:15.397255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.553 [2024-11-17 08:18:15.397267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:10.553 [2024-11-17 08:18:15.397278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:10.553 [2024-11-17 08:18:15.397288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:10.553 [2024-11-17 08:18:15.397298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:10.553 [2024-11-17 08:18:15.397308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:10.553 [2024-11-17 08:18:15.397318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:10.553 [2024-11-17 08:18:15.397328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:10.553 [2024-11-17 08:18:15.397339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:10.553 [2024-11-17 08:18:15.397349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:10.553 [2024-11-17 08:18:15.397359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:10.553 [2024-11-17 08:18:15.397370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:10.553 [2024-11-17 08:18:15.397380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:10.553 [2024-11-17 08:18:15.397390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:10.553 [2024-11-17 08:18:15.397401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:10.553 [2024-11-17 08:18:15.397411] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:10.553 [2024-11-17 08:18:15.397423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.553 [2024-11-17 08:18:15.397434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:10.553 [2024-11-17 08:18:15.397459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:10.553 [2024-11-17 08:18:15.397469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:10.553 [2024-11-17 08:18:15.397479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:10.553 [2024-11-17 08:18:15.397490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.397505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:10.553 [2024-11-17 08:18:15.397516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:18:10.553 [2024-11-17 08:18:15.397526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.553 [2024-11-17 08:18:15.424065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.424120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.553 [2024-11-17 08:18:15.424153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.460 ms 00:18:10.553 [2024-11-17 08:18:15.424163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.553 [2024-11-17 08:18:15.424319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.424336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:10.553 [2024-11-17 08:18:15.424347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:10.553 [2024-11-17 08:18:15.424356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.553 [2024-11-17 08:18:15.463840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.463884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.553 [2024-11-17 08:18:15.463920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.457 ms 00:18:10.553 [2024-11-17 08:18:15.463930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.553 [2024-11-17 08:18:15.464057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.464074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.553 [2024-11-17 08:18:15.464085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:10.553 [2024-11-17 08:18:15.464110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.553 [2024-11-17 08:18:15.464464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.464495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.553 [2024-11-17 08:18:15.464508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:18:10.553 [2024-11-17 08:18:15.464524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.553 [2024-11-17 08:18:15.464664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.464688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.553 [2024-11-17 08:18:15.464700] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:18:10.553 [2024-11-17 08:18:15.464710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.553 [2024-11-17 08:18:15.478332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.478366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.553 [2024-11-17 08:18:15.478397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.596 ms 00:18:10.553 [2024-11-17 08:18:15.478407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.553 [2024-11-17 08:18:15.491333] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:10.553 [2024-11-17 08:18:15.491376] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:10.553 [2024-11-17 08:18:15.491422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.491433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:10.553 [2024-11-17 08:18:15.491444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.894 ms 00:18:10.553 [2024-11-17 08:18:15.491454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.553 [2024-11-17 08:18:15.514670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.514714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:10.553 [2024-11-17 08:18:15.514744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.127 ms 00:18:10.553 [2024-11-17 08:18:15.514754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.553 [2024-11-17 08:18:15.527204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.527236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:10.553 [2024-11-17 08:18:15.527265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.369 ms 00:18:10.553 [2024-11-17 08:18:15.527274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.553 [2024-11-17 08:18:15.539447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.539479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:10.553 [2024-11-17 08:18:15.539508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.095 ms 00:18:10.553 [2024-11-17 08:18:15.539517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.553 [2024-11-17 08:18:15.540338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.553 [2024-11-17 08:18:15.540382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:10.553 [2024-11-17 08:18:15.540410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:18:10.553 [2024-11-17 08:18:15.540419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.814 [2024-11-17 08:18:15.598844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.814 [2024-11-17 08:18:15.598907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:10.814 [2024-11-17 08:18:15.598939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 58.393 ms 00:18:10.814 [2024-11-17 08:18:15.598950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.814 [2024-11-17 08:18:15.609094] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:10.814 [2024-11-17 08:18:15.620343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.814 [2024-11-17 08:18:15.620393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:10.814 [2024-11-17 08:18:15.620426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.227 ms 00:18:10.814 [2024-11-17 08:18:15.620441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.814 [2024-11-17 08:18:15.620560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.814 [2024-11-17 08:18:15.620578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:10.814 [2024-11-17 08:18:15.620589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:10.814 [2024-11-17 08:18:15.620598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.814 [2024-11-17 08:18:15.620710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.814 [2024-11-17 08:18:15.620746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:10.814 [2024-11-17 08:18:15.620758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:10.814 [2024-11-17 08:18:15.620775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.814 [2024-11-17 08:18:15.620813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.814 [2024-11-17 08:18:15.620826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:10.814 [2024-11-17 08:18:15.620837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:10.814 [2024-11-17 08:18:15.620847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.814 [2024-11-17 08:18:15.620885] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:10.814 [2024-11-17 08:18:15.620900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.814 [2024-11-17 08:18:15.620911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:10.814 [2024-11-17 08:18:15.620922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:10.814 [2024-11-17 08:18:15.620932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.814 [2024-11-17 08:18:15.645323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.814 [2024-11-17 08:18:15.645357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:10.814 [2024-11-17 08:18:15.645388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.363 ms 00:18:10.814 [2024-11-17 08:18:15.645398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.814 [2024-11-17 08:18:15.645497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.814 [2024-11-17 08:18:15.645515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:10.814 [2024-11-17 08:18:15.645526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:10.814 [2024-11-17 08:18:15.645535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
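Every management step in this startup sequence is emitted by trace_step in mngt/ftl_mngt.c as a fixed quadruple of records: Action (or Rollback), name, duration, status. That regularity makes it easy to pull a per-step timing table out of a saved console log; the pipeline below is only a rough sketch, assuming the log has been saved to ftl.log:

    # Pair each "name: <step>" record with the "duration: <n> ms" record
    # that follows it and print a timing table.
    grep -oE 'name: [^:]+|duration: [0-9.]+ ms' ftl.log |
        awk '/^name:/     { step = substr($0, 7); sub(/ [0-9]+$/, "", step) }
             /^duration:/ { printf "%10s ms  %s\n", $2, step }'

The sub() call strips the trailing console-timestamp fragment that the greedy grep match picks up after each step name.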
00:18:10.814 [2024-11-17 08:18:15.646811] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:10.814 [2024-11-17 08:18:15.650567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 277.159 ms, result 0 00:18:10.814 [2024-11-17 08:18:15.651449] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:10.814 [2024-11-17 08:18:15.665220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:11.075  [2024-11-17T08:18:16.087Z] Copying: 4096/4096 [kB] (average 22 MBps)[2024-11-17 08:18:15.848613] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:11.075 [2024-11-17 08:18:15.858517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.075 [2024-11-17 08:18:15.858552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:11.075 [2024-11-17 08:18:15.858588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:11.075 [2024-11-17 08:18:15.858598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.075 [2024-11-17 08:18:15.858624] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:11.075 [2024-11-17 08:18:15.861456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.075 [2024-11-17 08:18:15.861484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:11.075 [2024-11-17 08:18:15.861511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.814 ms 00:18:11.075 [2024-11-17 08:18:15.861520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.075 [2024-11-17 08:18:15.863163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.075 [2024-11-17 08:18:15.863197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:11.075 [2024-11-17 08:18:15.863225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms 00:18:11.075 [2024-11-17 08:18:15.863235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.075 [2024-11-17 08:18:15.866520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.075 [2024-11-17 08:18:15.866575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:11.075 [2024-11-17 08:18:15.866604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.265 ms 00:18:11.075 [2024-11-17 08:18:15.866614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.075 [2024-11-17 08:18:15.872710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.075 [2024-11-17 08:18:15.872754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:11.075 [2024-11-17 08:18:15.872782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.059 ms 00:18:11.075 [2024-11-17 08:18:15.872791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.075 [2024-11-17 08:18:15.897269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.075 [2024-11-17 08:18:15.897318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:11.075 [2024-11-17 08:18:15.897347] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.416 ms 00:18:11.076 [2024-11-17 08:18:15.897356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.076 [2024-11-17 08:18:15.911988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.076 [2024-11-17 08:18:15.912021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:11.076 [2024-11-17 08:18:15.912057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.591 ms 00:18:11.076 [2024-11-17 08:18:15.912067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.076 [2024-11-17 08:18:15.912212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.076 [2024-11-17 08:18:15.912231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:11.076 [2024-11-17 08:18:15.912243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:11.076 [2024-11-17 08:18:15.912283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.076 [2024-11-17 08:18:15.937871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.076 [2024-11-17 08:18:15.937920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:11.076 [2024-11-17 08:18:15.937950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.539 ms 00:18:11.076 [2024-11-17 08:18:15.937959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.076 [2024-11-17 08:18:15.966126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.076 [2024-11-17 08:18:15.966169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:11.076 [2024-11-17 08:18:15.966199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.112 ms 00:18:11.076 [2024-11-17 08:18:15.966209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.076 [2024-11-17 08:18:15.991773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.076 [2024-11-17 08:18:15.991836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:11.076 [2024-11-17 08:18:15.991866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.502 ms 00:18:11.076 [2024-11-17 08:18:15.991875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.076 [2024-11-17 08:18:16.017247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.076 [2024-11-17 08:18:16.017297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:11.076 [2024-11-17 08:18:16.017327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.305 ms 00:18:11.076 [2024-11-17 08:18:16.017336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.076 [2024-11-17 08:18:16.017392] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:11.076 [2024-11-17 08:18:16.017414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:11.076 [2024-11-17 08:18:16.017427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:11.076 [2024-11-17 08:18:16.017437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:11.076 [2024-11-17 08:18:16.017447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:11.076 [2024-11-17 08:18:16.017457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5-100: 0 / 261120 wr_cnt: 0 state: free
00:18:11.077 [2024-11-17 08:18:16.018538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:11.077 [2024-11-17 08:18:16.018549] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a6c485b1-efda-4eee-863c-5f40f9b95397
00:18:11.077 [2024-11-17 08:18:16.018559] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:11.077 [2024-11-17 08:18:16.018569] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total
writes: 960 00:18:11.077 [2024-11-17 08:18:16.018579] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:11.077 [2024-11-17 08:18:16.018589] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:11.077 [2024-11-17 08:18:16.018598] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:11.077 [2024-11-17 08:18:16.018608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:11.077 [2024-11-17 08:18:16.018623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:11.077 [2024-11-17 08:18:16.018632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:11.077 [2024-11-17 08:18:16.018641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:11.077 [2024-11-17 08:18:16.018652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.077 [2024-11-17 08:18:16.018662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:11.077 [2024-11-17 08:18:16.018674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:18:11.077 [2024-11-17 08:18:16.018684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.077 [2024-11-17 08:18:16.032696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.077 [2024-11-17 08:18:16.032744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:11.077 [2024-11-17 08:18:16.032773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.988 ms 00:18:11.077 [2024-11-17 08:18:16.032783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.077 [2024-11-17 08:18:16.033305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.077 [2024-11-17 08:18:16.033335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:11.077 [2024-11-17 08:18:16.033348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:18:11.077 [2024-11-17 08:18:16.033358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.077 [2024-11-17 08:18:16.071259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.077 [2024-11-17 08:18:16.071299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.077 [2024-11-17 08:18:16.071329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.077 [2024-11-17 08:18:16.071344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.077 [2024-11-17 08:18:16.071443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.077 [2024-11-17 08:18:16.071458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.077 [2024-11-17 08:18:16.071469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.077 [2024-11-17 08:18:16.071479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.077 [2024-11-17 08:18:16.071585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.077 [2024-11-17 08:18:16.071603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.077 [2024-11-17 08:18:16.071614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.077 [2024-11-17 08:18:16.071624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.077 [2024-11-17 08:18:16.071656] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.077 [2024-11-17 08:18:16.071669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.077 [2024-11-17 08:18:16.071679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.077 [2024-11-17 08:18:16.071689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.338 [2024-11-17 08:18:16.154680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.338 [2024-11-17 08:18:16.154728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:11.338 [2024-11-17 08:18:16.154758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.338 [2024-11-17 08:18:16.154768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.338 [2024-11-17 08:18:16.219435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.338 [2024-11-17 08:18:16.219499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:11.338 [2024-11-17 08:18:16.219531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.338 [2024-11-17 08:18:16.219541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.338 [2024-11-17 08:18:16.219615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.338 [2024-11-17 08:18:16.219630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:11.338 [2024-11-17 08:18:16.219641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.338 [2024-11-17 08:18:16.219651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.338 [2024-11-17 08:18:16.219682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.338 [2024-11-17 08:18:16.219729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:11.338 [2024-11-17 08:18:16.219739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.338 [2024-11-17 08:18:16.219748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.338 [2024-11-17 08:18:16.219907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.338 [2024-11-17 08:18:16.219925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:11.338 [2024-11-17 08:18:16.219937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.338 [2024-11-17 08:18:16.219947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.338 [2024-11-17 08:18:16.219997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.338 [2024-11-17 08:18:16.220020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:11.338 [2024-11-17 08:18:16.220037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.338 [2024-11-17 08:18:16.220048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.338 [2024-11-17 08:18:16.220092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.338 [2024-11-17 08:18:16.220105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:11.338 [2024-11-17 08:18:16.220116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.338 [2024-11-17 08:18:16.220126] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:11.338 [2024-11-17 08:18:16.220193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.338 [2024-11-17 08:18:16.220216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:11.338 [2024-11-17 08:18:16.220227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.338 [2024-11-17 08:18:16.220237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.338 [2024-11-17 08:18:16.220387] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 361.856 ms, result 0 00:18:12.276 00:18:12.276 00:18:12.276 08:18:16 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=75271 00:18:12.276 08:18:16 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:12.276 08:18:16 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 75271 00:18:12.276 08:18:16 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 75271 ']' 00:18:12.276 08:18:16 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.276 08:18:16 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.276 08:18:16 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.276 08:18:16 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.276 08:18:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:12.276 [2024-11-17 08:18:17.099130] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:12.276 [2024-11-17 08:18:17.099324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75271 ]
00:18:12.276 [2024-11-17 08:18:17.275705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:12.535 [2024-11-17 08:18:17.357636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:13.104 08:18:17 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:13.104 08:18:17 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:18:13.104 08:18:17 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:18:13.363 [2024-11-17 08:18:18.245712] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:13.363 [2024-11-17 08:18:18.245783] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:13.624 [2024-11-17 08:18:18.429512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.624 [2024-11-17 08:18:18.429575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:13.624 [2024-11-17 08:18:18.429612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:18:13.624 [2024-11-17 08:18:18.429624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.624 [2024-11-17 08:18:18.433069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.624 [2024-11-17 08:18:18.433135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:13.624 [2024-11-17 08:18:18.433169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.421 ms
00:18:13.624 [2024-11-17 08:18:18.433180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.624 [2024-11-17 08:18:18.433307] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:13.624 [2024-11-17 08:18:18.434226] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:13.624 [2024-11-17 08:18:18.434297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.624 [2024-11-17 08:18:18.434327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:13.624 [2024-11-17 08:18:18.434341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms
00:18:13.624 [2024-11-17 08:18:18.434351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.624 [2024-11-17 08:18:18.435671] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:18:13.624 [2024-11-17 08:18:18.449430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.624 [2024-11-17 08:18:18.449493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:18:13.624 [2024-11-17 08:18:18.449511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.765 ms
00:18:13.624 [2024-11-17 08:18:18.449527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.624 [2024-11-17 08:18:18.449634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.624 [2024-11-17 08:18:18.449691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:18:13.624 [2024-11-17 08:18:18.449720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:18:13.624 [2024-11-17 08:18:18.449737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.624 [2024-11-17 08:18:18.453890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.624 [2024-11-17 08:18:18.453973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:13.624 [2024-11-17 08:18:18.453989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.083 ms
00:18:13.624 [2024-11-17 08:18:18.454005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.624 [2024-11-17 08:18:18.454162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.624 [2024-11-17 08:18:18.454187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:13.624 [2024-11-17 08:18:18.454215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms
00:18:13.624 [2024-11-17 08:18:18.454242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.624 [2024-11-17 08:18:18.454286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.624 [2024-11-17 08:18:18.454303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:18:13.624 [2024-11-17 08:18:18.454315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:18:13.624 [2024-11-17 08:18:18.454327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.624 [2024-11-17 08:18:18.454358] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:18:13.624 [2024-11-17 08:18:18.458026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.624 [2024-11-17 08:18:18.458075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:13.624 [2024-11-17 08:18:18.458125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.670 ms
00:18:13.624 [2024-11-17 08:18:18.458138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.624 [2024-11-17 08:18:18.458207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.624 [2024-11-17 08:18:18.458225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:18:13.624 [2024-11-17 08:18:18.458242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:18:13.624 [2024-11-17 08:18:18.458258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.624 [2024-11-17 08:18:18.458291] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:18:13.624 [2024-11-17 08:18:18.458350] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:18:13.624 [2024-11-17 08:18:18.458406] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:18:13.624 [2024-11-17 08:18:18.458430] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:18:13.624 [2024-11-17 08:18:18.458541] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:18:13.624 [2024-11-17 08:18:18.458564] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:18:13.624 [2024-11-17 08:18:18.458588] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:18:13.624 [2024-11-17 08:18:18.458609] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:18:13.624 [2024-11-17 08:18:18.458629] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:18:13.624 [2024-11-17 08:18:18.458642] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:18:13.624 [2024-11-17 08:18:18.458659] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:18:13.624 [2024-11-17 08:18:18.458671] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:18:13.624 [2024-11-17 08:18:18.458692] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:18:13.624 [2024-11-17 08:18:18.458705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.624 [2024-11-17 08:18:18.458722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:18:13.624 [2024-11-17 08:18:18.458735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms
00:18:13.624 [2024-11-17 08:18:18.458751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.624 [2024-11-17 08:18:18.458848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.624 [2024-11-17 08:18:18.458870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:18:13.624 [2024-11-17 08:18:18.458884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms
00:18:13.624 [2024-11-17 08:18:18.458900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.624 [2024-11-17 08:18:18.459017] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:18:13.624 [2024-11-17 08:18:18.459041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:18:13.624 [2024-11-17 08:18:18.459054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:18:13.624 [2024-11-17 08:18:18.459067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:13.624 [2024-11-17 08:18:18.459093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:18:13.624 [2024-11-17 08:18:18.459110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:18:13.625 [2024-11-17 08:18:18.459121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:18:13.625 [2024-11-17 08:18:18.459140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:18:13.625 [2024-11-17 08:18:18.459151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:18:13.625 [2024-11-17 08:18:18.459164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:18:13.625 [2024-11-17 08:18:18.459175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:18:13.625 [2024-11-17 08:18:18.459188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:18:13.625 [2024-11-17 08:18:18.459199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:18:13.625 [2024-11-17 08:18:18.459219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:18:13.625 [2024-11-17 08:18:18.459230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:18:13.625 [2024-11-17 08:18:18.459243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:13.625 [2024-11-17 08:18:18.459253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:18:13.625 [2024-11-17 08:18:18.459266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:18:13.625 [2024-11-17 08:18:18.459276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:13.625 [2024-11-17 08:18:18.459289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:18:13.625 [2024-11-17 08:18:18.459311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:18:13.625 [2024-11-17 08:18:18.459324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:13.625 [2024-11-17 08:18:18.459337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:18:13.625 [2024-11-17 08:18:18.459352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:18:13.625 [2024-11-17 08:18:18.459388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:13.625 [2024-11-17 08:18:18.459404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:18:13.625 [2024-11-17 08:18:18.459415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:18:13.625 [2024-11-17 08:18:18.459428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:13.625 [2024-11-17 08:18:18.459439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:18:13.625 [2024-11-17 08:18:18.459451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:18:13.625 [2024-11-17 08:18:18.459462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:13.625 [2024-11-17 08:18:18.459477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:18:13.625 [2024-11-17 08:18:18.459489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:18:13.625 [2024-11-17 08:18:18.459502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:18:13.625 [2024-11-17 08:18:18.459512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:18:13.625 [2024-11-17 08:18:18.459525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:18:13.625 [2024-11-17 08:18:18.459536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:18:13.625 [2024-11-17 08:18:18.459549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:18:13.625 [2024-11-17 08:18:18.459561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:18:13.625 [2024-11-17 08:18:18.459575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:13.625 [2024-11-17 08:18:18.459586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:18:13.625 [2024-11-17 08:18:18.459599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:18:13.625 [2024-11-17 08:18:18.459610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:13.625 [2024-11-17 08:18:18.459623] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:18:13.625 [2024-11-17 08:18:18.459636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:18:13.625 [2024-11-17 08:18:18.459652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:18:13.625 [2024-11-17 08:18:18.459663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:13.625 [2024-11-17 08:18:18.459677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:18:13.625 [2024-11-17 08:18:18.459688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:18:13.625 [2024-11-17 08:18:18.459701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:18:13.625 [2024-11-17 08:18:18.459712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:18:13.625 [2024-11-17 08:18:18.459739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:18:13.625 [2024-11-17 08:18:18.459750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:18:13.625 [2024-11-17 08:18:18.459764] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:18:13.625 [2024-11-17 08:18:18.459779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:13.625 [2024-11-17 08:18:18.459798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:18:13.625 [2024-11-17 08:18:18.459810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:18:13.625 [2024-11-17 08:18:18.459823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:18:13.625 [2024-11-17 08:18:18.459835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:18:13.625 [2024-11-17 08:18:18.459848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:18:13.625 [2024-11-17 08:18:18.459859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:18:13.625 [2024-11-17 08:18:18.459872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:18:13.625 [2024-11-17 08:18:18.459883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:18:13.625 [2024-11-17 08:18:18.459896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:18:13.625 [2024-11-17 08:18:18.459908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:18:13.625 [2024-11-17 08:18:18.459921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:18:13.625 [2024-11-17 08:18:18.459932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:18:13.625 [2024-11-17 08:18:18.459945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:18:13.625 [2024-11-17 08:18:18.459957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:18:13.625 [2024-11-17 08:18:18.459970] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:18:13.625 [2024-11-17 08:18:18.459983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:13.625 [2024-11-17 08:18:18.459998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:18:13.625 [2024-11-17 08:18:18.460010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:18:13.625 [2024-11-17 08:18:18.460023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:18:13.625 [2024-11-17 08:18:18.460035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:18:13.625 [2024-11-17 08:18:18.460049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.625 [2024-11-17 08:18:18.460060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:18:13.625 [2024-11-17 08:18:18.460073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.094 ms
00:18:13.625 [2024-11-17 08:18:18.460084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.625 [2024-11-17 08:18:18.488302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.625 [2024-11-17 08:18:18.488366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:13.625 [2024-11-17 08:18:18.488406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.105 ms
00:18:13.625 [2024-11-17 08:18:18.488420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.625 [2024-11-17 08:18:18.488625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.625 [2024-11-17 08:18:18.488660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:18:13.625 [2024-11-17 08:18:18.488679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:18:13.625 [2024-11-17 08:18:18.488692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.625 [2024-11-17 08:18:18.523606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.625 [2024-11-17 08:18:18.523665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:13.625 [2024-11-17 08:18:18.523705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.881 ms
00:18:13.625 [2024-11-17 08:18:18.523716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.625 [2024-11-17 08:18:18.523852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.625 [2024-11-17 08:18:18.523870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:13.625 [2024-11-17 08:18:18.523915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:13.625 [2024-11-17 08:18:18.523926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.625 [2024-11-17 08:18:18.524286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.625 [2024-11-17 08:18:18.524313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:13.625 [2024-11-17 08:18:18.524333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms
00:18:13.625 [2024-11-17 08:18:18.524345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.625 [2024-11-17 08:18:18.524552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.625 [2024-11-17 08:18:18.524574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:13.625 [2024-11-17 08:18:18.524589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms
00:18:13.625 [2024-11-17 08:18:18.524600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.625 [2024-11-17 08:18:18.540586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.625 [2024-11-17 08:18:18.540622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:13.626 [2024-11-17 08:18:18.540656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.957 ms
00:18:13.626 [2024-11-17 08:18:18.540666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.626 [2024-11-17 08:18:18.553696] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:18:13.626 [2024-11-17 08:18:18.553730] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:18:13.626 [2024-11-17 08:18:18.553767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.626 [2024-11-17 08:18:18.553781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:18:13.626 [2024-11-17 08:18:18.553797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.971 ms
00:18:13.626 [2024-11-17 08:18:18.553808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.626 [2024-11-17 08:18:18.577061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.626 [2024-11-17 08:18:18.577119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:18:13.626 [2024-11-17 08:18:18.577158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.164 ms
00:18:13.626 [2024-11-17 08:18:18.577171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.626 [2024-11-17 08:18:18.589429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.626 [2024-11-17 08:18:18.589463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:18:13.626 [2024-11-17 08:18:18.589503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.148 ms
00:18:13.626 [2024-11-17 08:18:18.589515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.626 [2024-11-17 08:18:18.601656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.626 [2024-11-17 08:18:18.601689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:18:13.626 [2024-11-17 08:18:18.601725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.055 ms
00:18:13.626 [2024-11-17 08:18:18.601737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.626 [2024-11-17 08:18:18.602592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.626 [2024-11-17 08:18:18.602620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:18:13.626 [2024-11-17 08:18:18.602655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms
00:18:13.626 [2024-11-17 08:18:18.602666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.885 [2024-11-17 08:18:18.670901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.885 [2024-11-17 08:18:18.670967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:18:13.885 [2024-11-17 08:18:18.671008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.197 ms
00:18:13.885 [2024-11-17 08:18:18.671021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.885 [2024-11-17 08:18:18.680923] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:18:13.886 [2024-11-17 08:18:18.692014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.886 [2024-11-17 08:18:18.692109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:18:13.886 [2024-11-17 08:18:18.692134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.847 ms
00:18:13.886 [2024-11-17 08:18:18.692147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.886 [2024-11-17 08:18:18.692265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.886 [2024-11-17 08:18:18.692285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:18:13.886 [2024-11-17 08:18:18.692297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:18:13.886 [2024-11-17 08:18:18.692309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.886 [2024-11-17 08:18:18.692416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.886 [2024-11-17 08:18:18.692434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:18:13.886 [2024-11-17 08:18:18.692447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms
00:18:13.886 [2024-11-17 08:18:18.692460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.886 [2024-11-17 08:18:18.692491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.886 [2024-11-17 08:18:18.692506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:18:13.886 [2024-11-17 08:18:18.692518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:18:13.886 [2024-11-17 08:18:18.692533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.886 [2024-11-17 08:18:18.692572] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:18:13.886 [2024-11-17 08:18:18.692591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.886 [2024-11-17 08:18:18.692603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:18:13.886 [2024-11-17 08:18:18.692619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:18:13.886 [2024-11-17 08:18:18.692631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.886 [2024-11-17 08:18:18.716864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.886 [2024-11-17 08:18:18.716901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:13.886 [2024-11-17 08:18:18.716935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.201 ms
00:18:13.886 [2024-11-17 08:18:18.716946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.886 [2024-11-17 08:18:18.717048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:13.886 [2024-11-17 08:18:18.717065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:13.886 [2024-11-17 08:18:18.717092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:18:13.886 [2024-11-17 08:18:18.717108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:13.886 [2024-11-17 08:18:18.718344] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:13.886 [2024-11-17 08:18:18.721790] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 288.431 ms, result 0
00:18:13.886 [2024-11-17 08:18:18.723115] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:13.886 Some configs were skipped because the RPC state that can call them passed over.
00:18:13.886 08:18:18 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:18:14.146 [2024-11-17 08:18:19.010433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:14.146 [2024-11-17 08:18:19.010510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:14.146 [2024-11-17 08:18:19.010529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.456 ms
00:18:14.146 [2024-11-17 08:18:19.010560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:14.146 [2024-11-17 08:18:19.010610] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.629 ms, result 0
00:18:14.146 true
00:18:14.146 08:18:19 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:18:14.405 [2024-11-17 08:18:19.270544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:14.405 [2024-11-17 08:18:19.270592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:14.405 [2024-11-17 08:18:19.270633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms
00:18:14.405 [2024-11-17 08:18:19.270661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:14.405 [2024-11-17 08:18:19.270745] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.460 ms, result 0
00:18:14.405 true
00:18:14.405 08:18:19 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 75271
00:18:14.405 08:18:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 75271 ']'
00:18:14.405 08:18:19 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 75271
00:18:14.405 08:18:19 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:18:14.405 08:18:19 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:14.405 08:18:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75271
00:18:14.405 08:18:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:14.405 08:18:19 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:14.405 killing process with pid 75271
00:18:14.405 08:18:19 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75271'
00:18:14.405 08:18:19 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 75271
00:18:14.405 08:18:19 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 75271
00:18:15.343 [2024-11-17 08:18:20.080587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.343 [2024-11-17 08:18:20.080899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:15.343 [2024-11-17 08:18:20.081022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:15.343 [2024-11-17 08:18:20.081048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.343 [2024-11-17 08:18:20.081116] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:15.343 [2024-11-17 08:18:20.084029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.343 [2024-11-17 08:18:20.084203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:15.343 [2024-11-17 08:18:20.084234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.885 ms
00:18:15.343 [2024-11-17 08:18:20.084246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.343 [2024-11-17 08:18:20.084532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.343 [2024-11-17 08:18:20.084550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:15.343 [2024-11-17 08:18:20.084563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms
00:18:15.343 [2024-11-17 08:18:20.084574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.344 [2024-11-17 08:18:20.088167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.344 [2024-11-17 08:18:20.088217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:15.344 [2024-11-17 08:18:20.088239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.568 ms
00:18:15.344 [2024-11-17 08:18:20.088251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.344 [2024-11-17 08:18:20.094208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.344 [2024-11-17 08:18:20.094239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:15.344 [2024-11-17 08:18:20.094271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.884 ms
00:18:15.344 [2024-11-17 08:18:20.094282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.344 [2024-11-17 08:18:20.104331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.344 [2024-11-17 08:18:20.104363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:15.344 [2024-11-17 08:18:20.104381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.993 ms
00:18:15.344 [2024-11-17 08:18:20.104400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.344 [2024-11-17 08:18:20.111740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.344 [2024-11-17 08:18:20.111775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:15.344 [2024-11-17 08:18:20.111795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.298 ms
00:18:15.344 [2024-11-17 08:18:20.111806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.344 [2024-11-17 08:18:20.111943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.344 [2024-11-17 08:18:20.111961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:15.344 [2024-11-17 08:18:20.111974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:18:15.344 [2024-11-17 08:18:20.111984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.344 [2024-11-17 08:18:20.122632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.344 [2024-11-17 08:18:20.122664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:18:15.344 [2024-11-17 08:18:20.122680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.625 ms
00:18:15.344 [2024-11-17 08:18:20.122690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.344 [2024-11-17 08:18:20.132889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.344 [2024-11-17 08:18:20.132922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:15.344 [2024-11-17 08:18:20.132946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.138 ms
00:18:15.344 [2024-11-17 08:18:20.132957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.344 [2024-11-17 08:18:20.142822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.344 [2024-11-17 08:18:20.142855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:15.344 [2024-11-17 08:18:20.142878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.803 ms
00:18:15.344 [2024-11-17 08:18:20.142889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.344 [2024-11-17 08:18:20.152875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.344 [2024-11-17 08:18:20.152909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:15.344 [2024-11-17 08:18:20.152928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.911 ms
00:18:15.344 [2024-11-17 08:18:20.152939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:15.344 [2024-11-17 08:18:20.152982] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:15.344 [2024-11-17 08:18:20.153001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:18:15.344 [2024-11-17 08:18:20.153991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:18:15.345 [2024-11-17 08:18:20.154635] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:15.345 [2024-11-17 08:18:20.154662] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a6c485b1-efda-4eee-863c-5f40f9b95397
00:18:15.345 [2024-11-17 08:18:20.154688] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:15.345 [2024-11-17 08:18:20.154713] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:15.345 [2024-11-17 08:18:20.154724] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:15.345 [2024-11-17 08:18:20.154740] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:15.345 [2024-11-17 08:18:20.154752] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:15.345 [2024-11-17 08:18:20.154767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:15.345 [2024-11-17 08:18:20.154780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:15.345 [2024-11-17 08:18:20.154795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:15.345 [2024-11-17 08:18:20.154806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:15.345 [2024-11-17 08:18:20.154822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:15.345 [2024-11-17 08:18:20.154834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:15.345 [2024-11-17 08:18:20.154851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.843 ms 00:18:15.345 [2024-11-17 08:18:20.154864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.345 [2024-11-17 08:18:20.168385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.345 [2024-11-17 08:18:20.168418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:15.345 [2024-11-17 08:18:20.168443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.466 ms 00:18:15.345 [2024-11-17 08:18:20.168455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.345 [2024-11-17 08:18:20.168845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.345 [2024-11-17 08:18:20.168866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:15.345 [2024-11-17 08:18:20.168884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:18:15.345 [2024-11-17 08:18:20.168901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.345 [2024-11-17 08:18:20.213892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.345 [2024-11-17 08:18:20.213931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:15.345 [2024-11-17 08:18:20.213951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.345 [2024-11-17 08:18:20.213962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.345 [2024-11-17 08:18:20.214129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.345 [2024-11-17 08:18:20.214149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:15.345 [2024-11-17 08:18:20.214167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.345 [2024-11-17 08:18:20.214185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.345 [2024-11-17 08:18:20.214264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.345 [2024-11-17 08:18:20.214283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:15.345 [2024-11-17 08:18:20.214305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.345 [2024-11-17 08:18:20.214317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.345 [2024-11-17 08:18:20.214347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.345 [2024-11-17 08:18:20.214372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:15.345 [2024-11-17 08:18:20.214389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.345 [2024-11-17 08:18:20.214402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.345 [2024-11-17 08:18:20.293002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.345 [2024-11-17 08:18:20.293059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:15.345 [2024-11-17 08:18:20.293117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.345 [2024-11-17 08:18:20.293132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.605 [2024-11-17 
08:18:20.359202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.605 [2024-11-17 08:18:20.359252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:15.605 [2024-11-17 08:18:20.359292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.605 [2024-11-17 08:18:20.359310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.605 [2024-11-17 08:18:20.359487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.605 [2024-11-17 08:18:20.359506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:15.605 [2024-11-17 08:18:20.359530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.605 [2024-11-17 08:18:20.359543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.605 [2024-11-17 08:18:20.359600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.605 [2024-11-17 08:18:20.359615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:15.605 [2024-11-17 08:18:20.359634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.605 [2024-11-17 08:18:20.359647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.605 [2024-11-17 08:18:20.359794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.605 [2024-11-17 08:18:20.359814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:15.605 [2024-11-17 08:18:20.359833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.605 [2024-11-17 08:18:20.359846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.605 [2024-11-17 08:18:20.359907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.605 [2024-11-17 08:18:20.359931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:15.605 [2024-11-17 08:18:20.359951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.605 [2024-11-17 08:18:20.359964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.605 [2024-11-17 08:18:20.360019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.605 [2024-11-17 08:18:20.360040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:15.605 [2024-11-17 08:18:20.360062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.605 [2024-11-17 08:18:20.360074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.605 [2024-11-17 08:18:20.360172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:15.605 [2024-11-17 08:18:20.360189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:15.605 [2024-11-17 08:18:20.360208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:15.605 [2024-11-17 08:18:20.360220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.605 [2024-11-17 08:18:20.360385] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 279.764 ms, result 0 00:18:16.174 08:18:21 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:16.174 [2024-11-17 08:18:21.181830] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:16.174 [2024-11-17 08:18:21.182005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75329 ] 00:18:16.433 [2024-11-17 08:18:21.357929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.433 [2024-11-17 08:18:21.438601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.693 [2024-11-17 08:18:21.690436] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:16.693 [2024-11-17 08:18:21.690504] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:16.954 [2024-11-17 08:18:21.846686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.954 [2024-11-17 08:18:21.846886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:16.954 [2024-11-17 08:18:21.846914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:16.954 [2024-11-17 08:18:21.846926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.954 [2024-11-17 08:18:21.849768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.954 [2024-11-17 08:18:21.849806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:16.954 [2024-11-17 08:18:21.849820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.808 ms 00:18:16.954 [2024-11-17 08:18:21.849830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.954 [2024-11-17 08:18:21.849948] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:16.954 [2024-11-17 08:18:21.850941] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:16.954 [2024-11-17 08:18:21.851185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.954 [2024-11-17 08:18:21.851299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:16.954 [2024-11-17 08:18:21.851323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:18:16.954 [2024-11-17 08:18:21.851335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.954 [2024-11-17 08:18:21.852670] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:16.954 [2024-11-17 08:18:21.865846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.954 [2024-11-17 08:18:21.866020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:16.954 [2024-11-17 08:18:21.866047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.177 ms 00:18:16.954 [2024-11-17 08:18:21.866059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.954 [2024-11-17 08:18:21.866214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.954 [2024-11-17 08:18:21.866235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:16.954 [2024-11-17 08:18:21.866248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:16.954 [2024-11-17 
08:18:21.866259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.954 [2024-11-17 08:18:21.870422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.954 [2024-11-17 08:18:21.870455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:16.954 [2024-11-17 08:18:21.870468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.100 ms 00:18:16.954 [2024-11-17 08:18:21.870493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.954 [2024-11-17 08:18:21.870613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.954 [2024-11-17 08:18:21.870632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:16.954 [2024-11-17 08:18:21.870660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:16.954 [2024-11-17 08:18:21.870670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.954 [2024-11-17 08:18:21.870704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.954 [2024-11-17 08:18:21.870723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:16.954 [2024-11-17 08:18:21.870735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:16.954 [2024-11-17 08:18:21.870745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.954 [2024-11-17 08:18:21.870773] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:16.954 [2024-11-17 08:18:21.874421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.954 [2024-11-17 08:18:21.874452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:16.954 [2024-11-17 08:18:21.874465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.656 ms 00:18:16.954 [2024-11-17 08:18:21.874475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.954 [2024-11-17 08:18:21.874515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.954 [2024-11-17 08:18:21.874529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:16.954 [2024-11-17 08:18:21.874539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:16.954 [2024-11-17 08:18:21.874548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.954 [2024-11-17 08:18:21.874586] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:16.954 [2024-11-17 08:18:21.874616] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:16.954 [2024-11-17 08:18:21.874651] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:16.954 [2024-11-17 08:18:21.874668] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:16.954 [2024-11-17 08:18:21.874757] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:16.954 [2024-11-17 08:18:21.874771] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:16.954 [2024-11-17 08:18:21.874782] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
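The geometry reported in the layout dump that follows is self-consistent: 23592960 L2P entries at 4 bytes each is exactly 90 MiB, matching the l2p region size, and the 103424.00 MiB base device is 101 GiB. A quick shell check of that arithmetic, with the constants copied from the log:

    # Cross-check the FTL layout figures printed below (values from the log).
    l2p_entries=23592960        # "L2P entries"
    l2p_addr_size=4             # "L2P address size" in bytes
    echo "l2p region: $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"  # -> 90 MiB
    echo "base dev:   $(( 103424 / 1024 )) GiB"                              # -> 101 GiB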
00:18:16.954 [2024-11-17 08:18:21.874794] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:16.954 [2024-11-17 08:18:21.874810] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:16.954 [2024-11-17 08:18:21.874820] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:16.954 [2024-11-17 08:18:21.874829] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:16.954 [2024-11-17 08:18:21.874838] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:16.954 [2024-11-17 08:18:21.874846] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:16.954 [2024-11-17 08:18:21.874856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.954 [2024-11-17 08:18:21.874866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:16.954 [2024-11-17 08:18:21.874876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:18:16.954 [2024-11-17 08:18:21.874885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.954 [2024-11-17 08:18:21.874966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.954 [2024-11-17 08:18:21.874980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:16.954 [2024-11-17 08:18:21.874995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:16.954 [2024-11-17 08:18:21.875004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.954 [2024-11-17 08:18:21.875127] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:16.954 [2024-11-17 08:18:21.875145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:16.954 [2024-11-17 08:18:21.875156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:16.954 [2024-11-17 08:18:21.875166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.954 [2024-11-17 08:18:21.875176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:16.954 [2024-11-17 08:18:21.875185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:16.954 [2024-11-17 08:18:21.875195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:16.954 [2024-11-17 08:18:21.875205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:16.954 [2024-11-17 08:18:21.875214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:16.954 [2024-11-17 08:18:21.875224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:16.954 [2024-11-17 08:18:21.875233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:16.954 [2024-11-17 08:18:21.875242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:16.954 [2024-11-17 08:18:21.875251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:16.954 [2024-11-17 08:18:21.875273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:16.954 [2024-11-17 08:18:21.875284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:16.954 [2024-11-17 08:18:21.875293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.954 [2024-11-17 08:18:21.875303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:18:16.954 [2024-11-17 08:18:21.875312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:16.954 [2024-11-17 08:18:21.875321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.954 [2024-11-17 08:18:21.875330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:16.954 [2024-11-17 08:18:21.875339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:16.954 [2024-11-17 08:18:21.875348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.954 [2024-11-17 08:18:21.875357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:16.954 [2024-11-17 08:18:21.875392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:16.954 [2024-11-17 08:18:21.875418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.954 [2024-11-17 08:18:21.875428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:16.954 [2024-11-17 08:18:21.875439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:16.954 [2024-11-17 08:18:21.875463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.954 [2024-11-17 08:18:21.875473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:16.955 [2024-11-17 08:18:21.875483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:16.955 [2024-11-17 08:18:21.875492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.955 [2024-11-17 08:18:21.875502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:16.955 [2024-11-17 08:18:21.875512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:16.955 [2024-11-17 08:18:21.875522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:16.955 [2024-11-17 08:18:21.875531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:16.955 [2024-11-17 08:18:21.875541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:16.955 [2024-11-17 08:18:21.875551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:16.955 [2024-11-17 08:18:21.875561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:16.955 [2024-11-17 08:18:21.875571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:16.955 [2024-11-17 08:18:21.875580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.955 [2024-11-17 08:18:21.875590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:16.955 [2024-11-17 08:18:21.875601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:16.955 [2024-11-17 08:18:21.875611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.955 [2024-11-17 08:18:21.875621] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:16.955 [2024-11-17 08:18:21.875632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:16.955 [2024-11-17 08:18:21.875643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:16.955 [2024-11-17 08:18:21.875658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.955 [2024-11-17 08:18:21.875669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:16.955 [2024-11-17 08:18:21.875680] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:16.955 [2024-11-17 08:18:21.875689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:16.955 [2024-11-17 08:18:21.875728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:16.955 [2024-11-17 08:18:21.875752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:16.955 [2024-11-17 08:18:21.875776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:16.955 [2024-11-17 08:18:21.875787] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:16.955 [2024-11-17 08:18:21.875798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:16.955 [2024-11-17 08:18:21.875809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:16.955 [2024-11-17 08:18:21.875818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:16.955 [2024-11-17 08:18:21.875828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:16.955 [2024-11-17 08:18:21.875837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:16.955 [2024-11-17 08:18:21.875846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:16.955 [2024-11-17 08:18:21.875855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:16.955 [2024-11-17 08:18:21.875865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:16.955 [2024-11-17 08:18:21.875874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:16.955 [2024-11-17 08:18:21.875884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:16.955 [2024-11-17 08:18:21.875893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:16.955 [2024-11-17 08:18:21.875902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:16.955 [2024-11-17 08:18:21.875912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:16.955 [2024-11-17 08:18:21.875921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:16.955 [2024-11-17 08:18:21.875930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:16.955 [2024-11-17 08:18:21.875940] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:16.955 [2024-11-17 08:18:21.875950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:16.955 [2024-11-17 08:18:21.875961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:16.955 [2024-11-17 08:18:21.875971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:16.955 [2024-11-17 08:18:21.875982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:16.955 [2024-11-17 08:18:21.875992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:16.955 [2024-11-17 08:18:21.876002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.955 [2024-11-17 08:18:21.876012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:16.955 [2024-11-17 08:18:21.876025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:18:16.955 [2024-11-17 08:18:21.876035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.955 [2024-11-17 08:18:21.907153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.955 [2024-11-17 08:18:21.907421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:16.955 [2024-11-17 08:18:21.907555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.057 ms 00:18:16.955 [2024-11-17 08:18:21.907611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.955 [2024-11-17 08:18:21.907978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.955 [2024-11-17 08:18:21.908182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:16.955 [2024-11-17 08:18:21.908306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:16.955 [2024-11-17 08:18:21.908414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.955 [2024-11-17 08:18:21.956482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.955 [2024-11-17 08:18:21.956700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:16.955 [2024-11-17 08:18:21.956820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.948 ms 00:18:16.955 [2024-11-17 08:18:21.956877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.955 [2024-11-17 08:18:21.957051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.955 [2024-11-17 08:18:21.957146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:16.955 [2024-11-17 08:18:21.957190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:16.955 [2024-11-17 08:18:21.957300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.955 [2024-11-17 08:18:21.957669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.955 [2024-11-17 08:18:21.957792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:16.955 [2024-11-17 08:18:21.957893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:18:16.955 [2024-11-17 08:18:21.958014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.955 [2024-11-17 08:18:21.958226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:16.955 [2024-11-17 08:18:21.958253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:16.955 [2024-11-17 08:18:21.958267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:18:16.955 [2024-11-17 08:18:21.958293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:21.973904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:21.973941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.215 [2024-11-17 08:18:21.973972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.582 ms 00:18:17.215 [2024-11-17 08:18:21.973983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:21.987492] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:17.215 [2024-11-17 08:18:21.987533] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:17.215 [2024-11-17 08:18:21.987567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:21.987578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:17.215 [2024-11-17 08:18:21.987590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.402 ms 00:18:17.215 [2024-11-17 08:18:21.987601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.011609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:22.011673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:17.215 [2024-11-17 08:18:22.011720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.920 ms 00:18:17.215 [2024-11-17 08:18:22.011760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.024706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:22.024741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:17.215 [2024-11-17 08:18:22.024771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.862 ms 00:18:17.215 [2024-11-17 08:18:22.024781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.037481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:22.037516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:17.215 [2024-11-17 08:18:22.037546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.620 ms 00:18:17.215 [2024-11-17 08:18:22.037556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.038288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:22.038323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:17.215 [2024-11-17 08:18:22.038339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:18:17.215 [2024-11-17 08:18:22.038350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.098463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 
08:18:22.098524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:17.215 [2024-11-17 08:18:22.098557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.080 ms 00:18:17.215 [2024-11-17 08:18:22.098569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.109577] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:17.215 [2024-11-17 08:18:22.121660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:22.121715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:17.215 [2024-11-17 08:18:22.121748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.941 ms 00:18:17.215 [2024-11-17 08:18:22.121758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.121889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:22.121907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:17.215 [2024-11-17 08:18:22.121919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:17.215 [2024-11-17 08:18:22.121929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.121990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:22.122005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:17.215 [2024-11-17 08:18:22.122016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:17.215 [2024-11-17 08:18:22.122026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.122063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:22.122081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:17.215 [2024-11-17 08:18:22.122124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:17.215 [2024-11-17 08:18:22.122154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.122198] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:17.215 [2024-11-17 08:18:22.122226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:22.122253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:17.215 [2024-11-17 08:18:22.122264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:17.215 [2024-11-17 08:18:22.122275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.147409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:22.147452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:17.215 [2024-11-17 08:18:22.147484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.090 ms 00:18:17.215 [2024-11-17 08:18:22.147495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.147606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.215 [2024-11-17 08:18:22.147626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:17.215 [2024-11-17 
08:18:22.147638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:17.215 [2024-11-17 08:18:22.147648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.215 [2024-11-17 08:18:22.148879] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:17.215 [2024-11-17 08:18:22.152538] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.657 ms, result 0 00:18:17.215 [2024-11-17 08:18:22.153424] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:17.215 [2024-11-17 08:18:22.167673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:18.595  [2024-11-17T08:18:24.544Z] Copying: 24/256 [MB] (24 MBps) [2024-11-17T08:18:25.483Z] Copying: 46/256 [MB] (21 MBps) [2024-11-17T08:18:26.420Z] Copying: 68/256 [MB] (21 MBps) [2024-11-17T08:18:27.359Z] Copying: 90/256 [MB] (21 MBps) [2024-11-17T08:18:28.297Z] Copying: 112/256 [MB] (21 MBps) [2024-11-17T08:18:29.256Z] Copying: 133/256 [MB] (21 MBps) [2024-11-17T08:18:30.236Z] Copying: 155/256 [MB] (21 MBps) [2024-11-17T08:18:31.615Z] Copying: 175/256 [MB] (19 MBps) [2024-11-17T08:18:32.552Z] Copying: 197/256 [MB] (21 MBps) [2024-11-17T08:18:33.489Z] Copying: 218/256 [MB] (21 MBps) [2024-11-17T08:18:34.058Z] Copying: 239/256 [MB] (21 MBps) [2024-11-17T08:18:34.317Z] Copying: 256/256 [MB] (average 21 MBps)[2024-11-17 08:18:34.150657] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:29.305 [2024-11-17 08:18:34.166145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.305 [2024-11-17 08:18:34.166383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:29.305 [2024-11-17 08:18:34.166583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:29.305 [2024-11-17 08:18:34.166690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.305 [2024-11-17 08:18:34.166850] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:29.305 [2024-11-17 08:18:34.170293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.305 [2024-11-17 08:18:34.170507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:29.305 [2024-11-17 08:18:34.170568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.280 ms 00:18:29.305 [2024-11-17 08:18:34.170581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.305 [2024-11-17 08:18:34.170943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.305 [2024-11-17 08:18:34.170965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:29.305 [2024-11-17 08:18:34.170979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:18:29.305 [2024-11-17 08:18:34.170991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.305 [2024-11-17 08:18:34.174610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.305 [2024-11-17 08:18:34.174660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:29.305 [2024-11-17 08:18:34.174691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.581 ms 00:18:29.305 [2024-11-17 
08:18:34.174704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.306 [2024-11-17 08:18:34.181624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.306 [2024-11-17 08:18:34.181670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:29.306 [2024-11-17 08:18:34.181700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.895 ms 00:18:29.306 [2024-11-17 08:18:34.181711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.306 [2024-11-17 08:18:34.211347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.306 [2024-11-17 08:18:34.211574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:29.306 [2024-11-17 08:18:34.211606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.571 ms 00:18:29.306 [2024-11-17 08:18:34.211619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.306 [2024-11-17 08:18:34.228645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.306 [2024-11-17 08:18:34.228690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:29.306 [2024-11-17 08:18:34.228721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.934 ms 00:18:29.306 [2024-11-17 08:18:34.228736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.306 [2024-11-17 08:18:34.228879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.306 [2024-11-17 08:18:34.228913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:29.306 [2024-11-17 08:18:34.228926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:29.306 [2024-11-17 08:18:34.228936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.306 [2024-11-17 08:18:34.256607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.306 [2024-11-17 08:18:34.256643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:29.306 [2024-11-17 08:18:34.256674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.636 ms 00:18:29.306 [2024-11-17 08:18:34.256685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.306 [2024-11-17 08:18:34.282723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.306 [2024-11-17 08:18:34.282772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:29.306 [2024-11-17 08:18:34.282804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.960 ms 00:18:29.306 [2024-11-17 08:18:34.282814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.306 [2024-11-17 08:18:34.308283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.306 [2024-11-17 08:18:34.308319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:29.306 [2024-11-17 08:18:34.308350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.409 ms 00:18:29.306 [2024-11-17 08:18:34.308359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.567 [2024-11-17 08:18:34.335511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.567 [2024-11-17 08:18:34.335676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:29.567 [2024-11-17 08:18:34.335721] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.067 ms 00:18:29.567 [2024-11-17 08:18:34.335748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.567 [2024-11-17 08:18:34.335830] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:29.567 [2024-11-17 08:18:34.335854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.335868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.335895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.335907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.335917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.335928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.335939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.335949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.335960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.335986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.335996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:18:29.567 [2024-11-17 08:18:34.336154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:29.567 [2024-11-17 08:18:34.336275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.336981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.337006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.337017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.337028] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.337039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.337049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:29.568 [2024-11-17 08:18:34.337068] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:29.568 [2024-11-17 08:18:34.337079] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a6c485b1-efda-4eee-863c-5f40f9b95397 00:18:29.568 [2024-11-17 08:18:34.337090] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:29.568 [2024-11-17 08:18:34.337100] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:29.568 [2024-11-17 08:18:34.337109] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:29.568 [2024-11-17 08:18:34.337120] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:29.568 [2024-11-17 08:18:34.337130] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:29.568 [2024-11-17 08:18:34.337140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:29.568 [2024-11-17 08:18:34.337161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:29.568 [2024-11-17 08:18:34.337173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:29.568 [2024-11-17 08:18:34.337182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:29.568 [2024-11-17 08:18:34.337193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.568 [2024-11-17 08:18:34.337209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:29.568 [2024-11-17 08:18:34.337220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:18:29.568 [2024-11-17 08:18:34.337231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.568 [2024-11-17 08:18:34.351311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.568 [2024-11-17 08:18:34.351346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:29.568 [2024-11-17 08:18:34.351415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.056 ms 00:18:29.569 [2024-11-17 08:18:34.351426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.351865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.569 [2024-11-17 08:18:34.351886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:29.569 [2024-11-17 08:18:34.351898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:18:29.569 [2024-11-17 08:18:34.351907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.387741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.569 [2024-11-17 08:18:34.387781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:29.569 [2024-11-17 08:18:34.387796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.569 [2024-11-17 08:18:34.387805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.387914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
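Every management step in these sequences is logged as an Action or Rollback header followed by name, duration, and status records from trace_step(). A sketch of a one-liner to pull the slowest steps out of a saved log, pairing each name record (428:trace_step) with the duration record (430:trace_step) that follows it; the ftl.log file name is illustrative:

    # List FTL management steps by duration, slowest first.
    awk -F': ' '/428:trace_step/ { step = $NF }
                /430:trace_step/ { sub(/ ms.*/, "", $NF); printf "%10.3f ms  %s\n", $NF, step }' ftl.log |
    sort -rn | head

On the startup sequence above, this surfaces Restore P2L checkpoints (60.080 ms) and Initialize NV cache (47.948 ms) as the dominant contributors to the 301.657 ms total.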
00:18:29.569 [2024-11-17 08:18:34.387931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:29.569 [2024-11-17 08:18:34.387942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.569 [2024-11-17 08:18:34.387951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.388008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.569 [2024-11-17 08:18:34.388025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:29.569 [2024-11-17 08:18:34.388035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.569 [2024-11-17 08:18:34.388045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.388067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.569 [2024-11-17 08:18:34.388118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:29.569 [2024-11-17 08:18:34.388146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.569 [2024-11-17 08:18:34.388155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.466620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.569 [2024-11-17 08:18:34.466886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:29.569 [2024-11-17 08:18:34.466913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.569 [2024-11-17 08:18:34.466925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.531811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.569 [2024-11-17 08:18:34.532008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:29.569 [2024-11-17 08:18:34.532036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.569 [2024-11-17 08:18:34.532048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.532192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.569 [2024-11-17 08:18:34.532214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:29.569 [2024-11-17 08:18:34.532226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.569 [2024-11-17 08:18:34.532236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.532270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.569 [2024-11-17 08:18:34.532284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:29.569 [2024-11-17 08:18:34.532300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.569 [2024-11-17 08:18:34.532310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.532442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.569 [2024-11-17 08:18:34.532460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:29.569 [2024-11-17 08:18:34.532472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.569 [2024-11-17 08:18:34.532482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 
08:18:34.532544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.569 [2024-11-17 08:18:34.532561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:29.569 [2024-11-17 08:18:34.532572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.569 [2024-11-17 08:18:34.532588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.532659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.569 [2024-11-17 08:18:34.532673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:29.569 [2024-11-17 08:18:34.532682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.569 [2024-11-17 08:18:34.532691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.532737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.569 [2024-11-17 08:18:34.532750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:29.569 [2024-11-17 08:18:34.532765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.569 [2024-11-17 08:18:34.532774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.569 [2024-11-17 08:18:34.532912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.802 ms, result 0 00:18:30.507 00:18:30.507 00:18:30.507 08:18:35 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:31.075 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:31.075 08:18:35 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:31.075 08:18:35 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:31.075 08:18:35 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:31.075 08:18:35 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:31.075 08:18:35 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:31.075 08:18:35 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:31.075 08:18:35 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 75271 00:18:31.075 08:18:35 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 75271 ']' 00:18:31.075 08:18:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 75271 00:18:31.075 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75271) - No such process 00:18:31.075 Process with pid 75271 is not found 00:18:31.075 08:18:35 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 75271 is not found' 00:18:31.075 ************************************ 00:18:31.075 END TEST ftl_trim 00:18:31.075 ************************************ 00:18:31.075 00:18:31.075 real 1m9.018s 00:18:31.075 user 1m33.537s 00:18:31.075 sys 0m6.479s 00:18:31.075 08:18:35 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.075 08:18:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:31.075 08:18:35 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:31.075 08:18:35 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:31.075 08:18:35 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 
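The teardown above tolerates a PID that already exited: killprocess probes pid 75271 with kill -0, gets "No such process", and reports "Process with pid 75271 is not found" rather than failing the test. A condensed sketch of that idiom; the function name is hypothetical, and the real killprocess in test/common/autotest_common.sh handles more cases:

kill_if_running() {
    local pid=$1
    [ -z "$pid" ] && return 1            # no PID recorded, nothing to kill
    if kill -0 "$pid" 2>/dev/null; then  # probe only: signal 0 sends nothing
        kill "$pid"
    else
        echo "Process with pid $pid is not found"
    fi
}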
00:18:31.075 08:18:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:31.076 ************************************ 00:18:31.076 START TEST ftl_restore 00:18:31.076 ************************************ 00:18:31.076 08:18:35 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:31.076 * Looking for test storage... 00:18:31.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.076 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:31.076 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:18:31.076 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:31.336 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.336 08:18:36 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:31.336 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.336 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:31.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.336 --rc genhtml_branch_coverage=1 00:18:31.336 --rc genhtml_function_coverage=1 00:18:31.336 --rc genhtml_legend=1 00:18:31.336 --rc geninfo_all_blocks=1 00:18:31.336 --rc geninfo_unexecuted_blocks=1 00:18:31.336 00:18:31.336 ' 00:18:31.336 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:31.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.336 --rc genhtml_branch_coverage=1 00:18:31.336 --rc genhtml_function_coverage=1 00:18:31.336 --rc genhtml_legend=1 00:18:31.336 --rc geninfo_all_blocks=1 00:18:31.336 --rc geninfo_unexecuted_blocks=1 00:18:31.336 00:18:31.336 ' 00:18:31.336 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:31.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.336 --rc genhtml_branch_coverage=1 00:18:31.336 --rc genhtml_function_coverage=1 00:18:31.336 --rc genhtml_legend=1 00:18:31.336 --rc geninfo_all_blocks=1 00:18:31.336 --rc geninfo_unexecuted_blocks=1 00:18:31.336 00:18:31.336 ' 00:18:31.336 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:31.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.336 --rc genhtml_branch_coverage=1 00:18:31.336 --rc genhtml_function_coverage=1 00:18:31.336 --rc genhtml_legend=1 00:18:31.336 --rc geninfo_all_blocks=1 00:18:31.336 --rc geninfo_unexecuted_blocks=1 00:18:31.336 00:18:31.336 ' 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
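The xtrace above steps through the lcov version gate from scripts/common.sh: both version strings are split on '.', '-' or ':' and compared field by field, so "lt 1.15 2" is true and the pre-2.x --rc lcov_* option spelling gets exported. A condensed sketch of the idiom (version_lt is a hypothetical name; the in-tree cmp_versions handles more edge cases):

version_lt() {
    local -a a b
    local i
    IFS='.-:' read -ra a <<< "$1"        # e.g. 1.15 -> (1 15)
    IFS='.-:' read -ra b <<< "$2"        # e.g. 2    -> (2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                             # equal is not "less than"
}

version_lt 1.15 2 && echo "pre-2.x lcov detected"    # true, matching the trace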
00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.1w3m1mMXDI 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:31.336 
08:18:36 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=75538 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 75538 00:18:31.336 08:18:36 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.336 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 75538 ']' 00:18:31.336 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.336 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.337 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.337 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.337 08:18:36 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:31.337 [2024-11-17 08:18:36.288877] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:31.337 [2024-11-17 08:18:36.289047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75538 ] 00:18:31.596 [2024-11-17 08:18:36.475894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.596 [2024-11-17 08:18:36.600242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.533 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.533 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:18:32.533 08:18:37 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:32.533 08:18:37 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:32.533 08:18:37 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:32.533 08:18:37 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:32.533 08:18:37 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:32.533 08:18:37 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:32.792 08:18:37 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:32.792 08:18:37 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:32.792 08:18:37 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:32.792 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:32.792 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:32.792 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:32.792 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:32.792 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:33.051 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:33.051 { 00:18:33.051 "name": "nvme0n1", 00:18:33.051 "aliases": [ 00:18:33.051 "cadd4891-5413-458e-aa53-dec515a168b8" 00:18:33.051 ], 00:18:33.051 "product_name": "NVMe disk", 00:18:33.051 "block_size": 4096, 00:18:33.051 "num_blocks": 1310720, 00:18:33.051 "uuid": 
"cadd4891-5413-458e-aa53-dec515a168b8", 00:18:33.051 "numa_id": -1, 00:18:33.051 "assigned_rate_limits": { 00:18:33.051 "rw_ios_per_sec": 0, 00:18:33.051 "rw_mbytes_per_sec": 0, 00:18:33.051 "r_mbytes_per_sec": 0, 00:18:33.051 "w_mbytes_per_sec": 0 00:18:33.051 }, 00:18:33.051 "claimed": true, 00:18:33.051 "claim_type": "read_many_write_one", 00:18:33.051 "zoned": false, 00:18:33.051 "supported_io_types": { 00:18:33.051 "read": true, 00:18:33.051 "write": true, 00:18:33.051 "unmap": true, 00:18:33.051 "flush": true, 00:18:33.051 "reset": true, 00:18:33.051 "nvme_admin": true, 00:18:33.051 "nvme_io": true, 00:18:33.051 "nvme_io_md": false, 00:18:33.051 "write_zeroes": true, 00:18:33.051 "zcopy": false, 00:18:33.051 "get_zone_info": false, 00:18:33.051 "zone_management": false, 00:18:33.051 "zone_append": false, 00:18:33.051 "compare": true, 00:18:33.051 "compare_and_write": false, 00:18:33.051 "abort": true, 00:18:33.051 "seek_hole": false, 00:18:33.051 "seek_data": false, 00:18:33.051 "copy": true, 00:18:33.051 "nvme_iov_md": false 00:18:33.051 }, 00:18:33.051 "driver_specific": { 00:18:33.051 "nvme": [ 00:18:33.051 { 00:18:33.051 "pci_address": "0000:00:11.0", 00:18:33.051 "trid": { 00:18:33.051 "trtype": "PCIe", 00:18:33.051 "traddr": "0000:00:11.0" 00:18:33.051 }, 00:18:33.051 "ctrlr_data": { 00:18:33.051 "cntlid": 0, 00:18:33.051 "vendor_id": "0x1b36", 00:18:33.051 "model_number": "QEMU NVMe Ctrl", 00:18:33.051 "serial_number": "12341", 00:18:33.051 "firmware_revision": "8.0.0", 00:18:33.051 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:33.051 "oacs": { 00:18:33.051 "security": 0, 00:18:33.051 "format": 1, 00:18:33.051 "firmware": 0, 00:18:33.051 "ns_manage": 1 00:18:33.051 }, 00:18:33.051 "multi_ctrlr": false, 00:18:33.051 "ana_reporting": false 00:18:33.051 }, 00:18:33.051 "vs": { 00:18:33.051 "nvme_version": "1.4" 00:18:33.051 }, 00:18:33.051 "ns_data": { 00:18:33.051 "id": 1, 00:18:33.051 "can_share": false 00:18:33.051 } 00:18:33.051 } 00:18:33.051 ], 00:18:33.051 "mp_policy": "active_passive" 00:18:33.051 } 00:18:33.051 } 00:18:33.051 ]' 00:18:33.051 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:33.051 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:33.051 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:33.051 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:33.051 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:33.051 08:18:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:18:33.051 08:18:37 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:33.051 08:18:37 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:33.051 08:18:37 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:33.051 08:18:37 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:33.051 08:18:37 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:33.310 08:18:38 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=8250dced-3d1d-4e22-96d1-736534a37479 00:18:33.310 08:18:38 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:33.310 08:18:38 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8250dced-3d1d-4e22-96d1-736534a37479 00:18:33.570 08:18:38 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:33.829 08:18:38 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=aa8f03c5-c57d-467b-8ae3-c0e460a59ade 00:18:33.829 08:18:38 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u aa8f03c5-c57d-467b-8ae3-c0e460a59ade 00:18:34.088 08:18:38 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=c5719f94-a578-48fd-9c9c-4c48e5c37dc0 00:18:34.088 08:18:38 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:34.088 08:18:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c5719f94-a578-48fd-9c9c-4c48e5c37dc0 00:18:34.088 08:18:38 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:34.088 08:18:38 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:34.088 08:18:38 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=c5719f94-a578-48fd-9c9c-4c48e5c37dc0 00:18:34.088 08:18:38 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:34.088 08:18:38 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size c5719f94-a578-48fd-9c9c-4c48e5c37dc0 00:18:34.088 08:18:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c5719f94-a578-48fd-9c9c-4c48e5c37dc0 00:18:34.088 08:18:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:34.088 08:18:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:34.088 08:18:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:34.088 08:18:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5719f94-a578-48fd-9c9c-4c48e5c37dc0 00:18:34.346 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:34.346 { 00:18:34.346 "name": "c5719f94-a578-48fd-9c9c-4c48e5c37dc0", 00:18:34.346 "aliases": [ 00:18:34.346 "lvs/nvme0n1p0" 00:18:34.346 ], 00:18:34.346 "product_name": "Logical Volume", 00:18:34.346 "block_size": 4096, 00:18:34.346 "num_blocks": 26476544, 00:18:34.346 "uuid": "c5719f94-a578-48fd-9c9c-4c48e5c37dc0", 00:18:34.346 "assigned_rate_limits": { 00:18:34.346 "rw_ios_per_sec": 0, 00:18:34.346 "rw_mbytes_per_sec": 0, 00:18:34.346 "r_mbytes_per_sec": 0, 00:18:34.346 "w_mbytes_per_sec": 0 00:18:34.346 }, 00:18:34.346 "claimed": false, 00:18:34.346 "zoned": false, 00:18:34.346 "supported_io_types": { 00:18:34.346 "read": true, 00:18:34.346 "write": true, 00:18:34.346 "unmap": true, 00:18:34.346 "flush": false, 00:18:34.346 "reset": true, 00:18:34.346 "nvme_admin": false, 00:18:34.346 "nvme_io": false, 00:18:34.346 "nvme_io_md": false, 00:18:34.346 "write_zeroes": true, 00:18:34.346 "zcopy": false, 00:18:34.346 "get_zone_info": false, 00:18:34.346 "zone_management": false, 00:18:34.346 "zone_append": false, 00:18:34.346 "compare": false, 00:18:34.346 "compare_and_write": false, 00:18:34.346 "abort": false, 00:18:34.346 "seek_hole": true, 00:18:34.346 "seek_data": true, 00:18:34.346 "copy": false, 00:18:34.346 "nvme_iov_md": false 00:18:34.346 }, 00:18:34.346 "driver_specific": { 00:18:34.346 "lvol": { 00:18:34.346 "lvol_store_uuid": "aa8f03c5-c57d-467b-8ae3-c0e460a59ade", 00:18:34.346 "base_bdev": "nvme0n1", 00:18:34.346 "thin_provision": true, 00:18:34.346 "num_allocated_clusters": 0, 00:18:34.346 "snapshot": false, 00:18:34.346 "clone": false, 00:18:34.346 "esnap_clone": false 00:18:34.346 } 00:18:34.346 } 00:18:34.346 } 00:18:34.346 ]' 00:18:34.346 08:18:39 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:34.346 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:34.346 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:34.346 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:34.346 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:34.346 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:34.346 08:18:39 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:34.346 08:18:39 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:34.346 08:18:39 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:34.914 08:18:39 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:34.914 08:18:39 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:34.914 08:18:39 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size c5719f94-a578-48fd-9c9c-4c48e5c37dc0 00:18:34.914 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c5719f94-a578-48fd-9c9c-4c48e5c37dc0 00:18:34.914 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:34.914 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:34.914 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:34.914 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5719f94-a578-48fd-9c9c-4c48e5c37dc0 00:18:34.914 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:34.914 { 00:18:34.914 "name": "c5719f94-a578-48fd-9c9c-4c48e5c37dc0", 00:18:34.914 "aliases": [ 00:18:34.914 "lvs/nvme0n1p0" 00:18:34.914 ], 00:18:34.914 "product_name": "Logical Volume", 00:18:34.914 "block_size": 4096, 00:18:34.914 "num_blocks": 26476544, 00:18:34.914 "uuid": "c5719f94-a578-48fd-9c9c-4c48e5c37dc0", 00:18:34.914 "assigned_rate_limits": { 00:18:34.914 "rw_ios_per_sec": 0, 00:18:34.914 "rw_mbytes_per_sec": 0, 00:18:34.914 "r_mbytes_per_sec": 0, 00:18:34.914 "w_mbytes_per_sec": 0 00:18:34.914 }, 00:18:34.914 "claimed": false, 00:18:34.914 "zoned": false, 00:18:34.914 "supported_io_types": { 00:18:34.914 "read": true, 00:18:34.914 "write": true, 00:18:34.914 "unmap": true, 00:18:34.914 "flush": false, 00:18:34.914 "reset": true, 00:18:34.914 "nvme_admin": false, 00:18:34.914 "nvme_io": false, 00:18:34.914 "nvme_io_md": false, 00:18:34.914 "write_zeroes": true, 00:18:34.914 "zcopy": false, 00:18:34.914 "get_zone_info": false, 00:18:34.914 "zone_management": false, 00:18:34.914 "zone_append": false, 00:18:34.914 "compare": false, 00:18:34.914 "compare_and_write": false, 00:18:34.914 "abort": false, 00:18:34.914 "seek_hole": true, 00:18:34.914 "seek_data": true, 00:18:34.914 "copy": false, 00:18:34.914 "nvme_iov_md": false 00:18:34.914 }, 00:18:34.914 "driver_specific": { 00:18:34.914 "lvol": { 00:18:34.914 "lvol_store_uuid": "aa8f03c5-c57d-467b-8ae3-c0e460a59ade", 00:18:34.914 "base_bdev": "nvme0n1", 00:18:34.914 "thin_provision": true, 00:18:34.914 "num_allocated_clusters": 0, 00:18:34.914 "snapshot": false, 00:18:34.914 "clone": false, 00:18:34.914 "esnap_clone": false 00:18:34.914 } 00:18:34.914 } 00:18:34.914 } 00:18:34.914 ]' 00:18:34.914 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
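Each bdev_get_bdevs JSON dump above feeds the same size helper: get_bdev_size extracts block_size and num_blocks with jq and multiplies them into MiB, which is where bs=4096 / nb=1310720 -> 5120 and bs=4096 / nb=26476544 -> 103424 come from. A minimal sketch of that computation ($rpc_py is the scripts/rpc.py wrapper set up in ftl/common.sh; the _mb suffix on the name is mine):

get_bdev_size_mb() {
    local bdev=$1 info bs nb
    info=$($rpc_py bdev_get_bdevs -b "$bdev")   # JSON array with one element
    bs=$(jq '.[] .block_size' <<< "$info")
    nb=$(jq '.[] .num_blocks' <<< "$info")
    echo $(( bs * nb / 1024 / 1024 ))           # bytes -> MiB
}

get_bdev_size_mb nvme0n1    # -> 5120 (4096 B * 1310720 blocks), as in the trace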
00:18:34.914 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:34.914 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:35.173 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:35.173 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:35.173 08:18:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:35.173 08:18:39 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:35.173 08:18:39 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:35.432 08:18:40 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:35.432 08:18:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size c5719f94-a578-48fd-9c9c-4c48e5c37dc0 00:18:35.432 08:18:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=c5719f94-a578-48fd-9c9c-4c48e5c37dc0 00:18:35.432 08:18:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:35.432 08:18:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:18:35.432 08:18:40 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:18:35.432 08:18:40 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c5719f94-a578-48fd-9c9c-4c48e5c37dc0 00:18:35.691 08:18:40 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:35.691 { 00:18:35.691 "name": "c5719f94-a578-48fd-9c9c-4c48e5c37dc0", 00:18:35.691 "aliases": [ 00:18:35.691 "lvs/nvme0n1p0" 00:18:35.691 ], 00:18:35.691 "product_name": "Logical Volume", 00:18:35.691 "block_size": 4096, 00:18:35.691 "num_blocks": 26476544, 00:18:35.691 "uuid": "c5719f94-a578-48fd-9c9c-4c48e5c37dc0", 00:18:35.691 "assigned_rate_limits": { 00:18:35.691 "rw_ios_per_sec": 0, 00:18:35.691 "rw_mbytes_per_sec": 0, 00:18:35.691 "r_mbytes_per_sec": 0, 00:18:35.691 "w_mbytes_per_sec": 0 00:18:35.691 }, 00:18:35.691 "claimed": false, 00:18:35.691 "zoned": false, 00:18:35.691 "supported_io_types": { 00:18:35.691 "read": true, 00:18:35.691 "write": true, 00:18:35.691 "unmap": true, 00:18:35.691 "flush": false, 00:18:35.691 "reset": true, 00:18:35.691 "nvme_admin": false, 00:18:35.691 "nvme_io": false, 00:18:35.691 "nvme_io_md": false, 00:18:35.691 "write_zeroes": true, 00:18:35.691 "zcopy": false, 00:18:35.691 "get_zone_info": false, 00:18:35.691 "zone_management": false, 00:18:35.691 "zone_append": false, 00:18:35.691 "compare": false, 00:18:35.691 "compare_and_write": false, 00:18:35.691 "abort": false, 00:18:35.691 "seek_hole": true, 00:18:35.691 "seek_data": true, 00:18:35.691 "copy": false, 00:18:35.691 "nvme_iov_md": false 00:18:35.691 }, 00:18:35.691 "driver_specific": { 00:18:35.691 "lvol": { 00:18:35.691 "lvol_store_uuid": "aa8f03c5-c57d-467b-8ae3-c0e460a59ade", 00:18:35.691 "base_bdev": "nvme0n1", 00:18:35.691 "thin_provision": true, 00:18:35.691 "num_allocated_clusters": 0, 00:18:35.691 "snapshot": false, 00:18:35.691 "clone": false, 00:18:35.691 "esnap_clone": false 00:18:35.691 } 00:18:35.691 } 00:18:35.691 } 00:18:35.691 ]' 00:18:35.691 08:18:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:35.691 08:18:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:18:35.691 08:18:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:35.691 08:18:40 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:18:35.691 08:18:40 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:35.691 08:18:40 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:18:35.691 08:18:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:35.691 08:18:40 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c5719f94-a578-48fd-9c9c-4c48e5c37dc0 --l2p_dram_limit 10' 00:18:35.691 08:18:40 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:35.691 08:18:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:35.691 08:18:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:35.691 08:18:40 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:35.691 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:35.691 08:18:40 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c5719f94-a578-48fd-9c9c-4c48e5c37dc0 --l2p_dram_limit 10 -c nvc0n1p0 00:18:35.951 [2024-11-17 08:18:40.800073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.951 [2024-11-17 08:18:40.800335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:35.951 [2024-11-17 08:18:40.800373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:35.951 [2024-11-17 08:18:40.800386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.951 [2024-11-17 08:18:40.800474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.951 [2024-11-17 08:18:40.800490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.951 [2024-11-17 08:18:40.800504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:35.951 [2024-11-17 08:18:40.800515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.951 [2024-11-17 08:18:40.800553] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:35.951 [2024-11-17 08:18:40.801494] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:35.951 [2024-11-17 08:18:40.801570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.951 [2024-11-17 08:18:40.801584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.951 [2024-11-17 08:18:40.801597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:18:35.951 [2024-11-17 08:18:40.801608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.951 [2024-11-17 08:18:40.801750] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f7d448d0-7c76-4123-9b8f-0733c52f8442 00:18:35.951 [2024-11-17 08:18:40.802706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.951 [2024-11-17 08:18:40.802761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:35.951 [2024-11-17 08:18:40.802778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:35.951 [2024-11-17 08:18:40.802790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.951 [2024-11-17 08:18:40.806967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.951 [2024-11-17 
08:18:40.807032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.951 [2024-11-17 08:18:40.807047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.129 ms 00:18:35.951 [2024-11-17 08:18:40.807059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.951 [2024-11-17 08:18:40.807191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.951 [2024-11-17 08:18:40.807213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.951 [2024-11-17 08:18:40.807225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:35.951 [2024-11-17 08:18:40.807241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.952 [2024-11-17 08:18:40.807318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.952 [2024-11-17 08:18:40.807338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:35.952 [2024-11-17 08:18:40.807352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:35.952 [2024-11-17 08:18:40.807374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.952 [2024-11-17 08:18:40.807437] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.952 [2024-11-17 08:18:40.811123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.952 [2024-11-17 08:18:40.811158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.952 [2024-11-17 08:18:40.811177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:18:35.952 [2024-11-17 08:18:40.811188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.952 [2024-11-17 08:18:40.811228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.952 [2024-11-17 08:18:40.811241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:35.952 [2024-11-17 08:18:40.811254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:35.952 [2024-11-17 08:18:40.811264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.952 [2024-11-17 08:18:40.811305] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:35.952 [2024-11-17 08:18:40.811518] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:35.952 [2024-11-17 08:18:40.811543] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:35.952 [2024-11-17 08:18:40.811558] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:35.952 [2024-11-17 08:18:40.811576] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:35.952 [2024-11-17 08:18:40.811589] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:35.952 [2024-11-17 08:18:40.811603] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:35.952 [2024-11-17 08:18:40.811616] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:35.952 [2024-11-17 08:18:40.811628] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:35.952 [2024-11-17 08:18:40.811639] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:35.952 [2024-11-17 08:18:40.811668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.952 [2024-11-17 08:18:40.811679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:35.952 [2024-11-17 08:18:40.811693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:18:35.952 [2024-11-17 08:18:40.811716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.952 [2024-11-17 08:18:40.811828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.952 [2024-11-17 08:18:40.811843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:35.952 [2024-11-17 08:18:40.811857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:35.952 [2024-11-17 08:18:40.811869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.952 [2024-11-17 08:18:40.811977] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:35.952 [2024-11-17 08:18:40.812008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:35.952 [2024-11-17 08:18:40.812022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.952 [2024-11-17 08:18:40.812033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:35.952 [2024-11-17 08:18:40.812056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:35.952 [2024-11-17 08:18:40.812078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:35.952 [2024-11-17 08:18:40.812090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.952 [2024-11-17 08:18:40.812111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:35.952 [2024-11-17 08:18:40.812121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:35.952 [2024-11-17 08:18:40.812133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.952 [2024-11-17 08:18:40.812143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:35.952 [2024-11-17 08:18:40.812155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:35.952 [2024-11-17 08:18:40.812164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:35.952 [2024-11-17 08:18:40.812210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:35.952 [2024-11-17 08:18:40.812226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:35.952 [2024-11-17 08:18:40.812248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.952 [2024-11-17 08:18:40.812270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:35.952 
[2024-11-17 08:18:40.812280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.952 [2024-11-17 08:18:40.812301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:35.952 [2024-11-17 08:18:40.812312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.952 [2024-11-17 08:18:40.812333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:35.952 [2024-11-17 08:18:40.812344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.952 [2024-11-17 08:18:40.812364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:35.952 [2024-11-17 08:18:40.812378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.952 [2024-11-17 08:18:40.812400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:35.952 [2024-11-17 08:18:40.812410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:35.952 [2024-11-17 08:18:40.812421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.952 [2024-11-17 08:18:40.812431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:35.952 [2024-11-17 08:18:40.812443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:35.952 [2024-11-17 08:18:40.812453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:35.952 [2024-11-17 08:18:40.812475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:35.952 [2024-11-17 08:18:40.812487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812496] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:35.952 [2024-11-17 08:18:40.812510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:35.952 [2024-11-17 08:18:40.812520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.952 [2024-11-17 08:18:40.812534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.952 [2024-11-17 08:18:40.812545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:35.952 [2024-11-17 08:18:40.812559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:35.952 [2024-11-17 08:18:40.812569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:35.952 [2024-11-17 08:18:40.812581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:35.952 [2024-11-17 08:18:40.812591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:35.952 [2024-11-17 08:18:40.812603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:35.952 [2024-11-17 08:18:40.812617] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:35.952 [2024-11-17 
08:18:40.812635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.953 [2024-11-17 08:18:40.812648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:35.953 [2024-11-17 08:18:40.812661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:35.953 [2024-11-17 08:18:40.812672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:35.953 [2024-11-17 08:18:40.812684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:35.953 [2024-11-17 08:18:40.812695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:35.953 [2024-11-17 08:18:40.812707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:35.953 [2024-11-17 08:18:40.812718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:35.953 [2024-11-17 08:18:40.812730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:35.953 [2024-11-17 08:18:40.812741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:35.953 [2024-11-17 08:18:40.812755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:35.953 [2024-11-17 08:18:40.812766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:35.953 [2024-11-17 08:18:40.812778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:35.953 [2024-11-17 08:18:40.812789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:35.953 [2024-11-17 08:18:40.812803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:35.953 [2024-11-17 08:18:40.812814] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:35.953 [2024-11-17 08:18:40.812828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.953 [2024-11-17 08:18:40.812839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:35.953 [2024-11-17 08:18:40.812852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:35.953 [2024-11-17 08:18:40.812863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:35.953 [2024-11-17 08:18:40.812876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:35.953 [2024-11-17 08:18:40.812888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.953 [2024-11-17 08:18:40.812902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:35.953 [2024-11-17 08:18:40.812915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:18:35.953 [2024-11-17 08:18:40.812928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.953 [2024-11-17 08:18:40.812983] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:35.953 [2024-11-17 08:18:40.813010] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:38.489 [2024-11-17 08:18:43.036551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.489 [2024-11-17 08:18:43.036637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:38.489 [2024-11-17 08:18:43.036657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2223.587 ms 00:18:38.489 [2024-11-17 08:18:43.036669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.489 [2024-11-17 08:18:43.062210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.489 [2024-11-17 08:18:43.062282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:38.489 [2024-11-17 08:18:43.062302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.327 ms 00:18:38.489 [2024-11-17 08:18:43.062315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.489 [2024-11-17 08:18:43.062463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.489 [2024-11-17 08:18:43.062513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:38.489 [2024-11-17 08:18:43.062525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:38.489 [2024-11-17 08:18:43.062541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.489 [2024-11-17 08:18:43.094167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.489 [2024-11-17 08:18:43.094413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:38.489 [2024-11-17 08:18:43.094545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.560 ms 00:18:38.489 [2024-11-17 08:18:43.094598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.094675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.094793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:38.490 [2024-11-17 08:18:43.094842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:38.490 [2024-11-17 08:18:43.094881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.095374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.095551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:38.490 [2024-11-17 08:18:43.095661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:18:38.490 [2024-11-17 08:18:43.095701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 
[2024-11-17 08:18:43.095824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.095845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:38.490 [2024-11-17 08:18:43.095857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:38.490 [2024-11-17 08:18:43.095871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.110239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.110298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:38.490 [2024-11-17 08:18:43.110315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.345 ms 00:18:38.490 [2024-11-17 08:18:43.110328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.120839] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:38.490 [2024-11-17 08:18:43.123481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.123646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:38.490 [2024-11-17 08:18:43.123777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.062 ms 00:18:38.490 [2024-11-17 08:18:43.123799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.188629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.188713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:38.490 [2024-11-17 08:18:43.188736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.789 ms 00:18:38.490 [2024-11-17 08:18:43.188747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.188964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.188988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:38.490 [2024-11-17 08:18:43.189004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:18:38.490 [2024-11-17 08:18:43.189014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.214147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.214186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:38.490 [2024-11-17 08:18:43.214205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.064 ms 00:18:38.490 [2024-11-17 08:18:43.214216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.238727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.238765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:38.490 [2024-11-17 08:18:43.238783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.458 ms 00:18:38.490 [2024-11-17 08:18:43.238793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.239488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.239586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:38.490 
[2024-11-17 08:18:43.239613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:18:38.490 [2024-11-17 08:18:43.239629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.308966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.309015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:38.490 [2024-11-17 08:18:43.309037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.265 ms 00:18:38.490 [2024-11-17 08:18:43.309047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.334335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.334387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:38.490 [2024-11-17 08:18:43.334407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.168 ms 00:18:38.490 [2024-11-17 08:18:43.334418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.358913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.358950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:38.490 [2024-11-17 08:18:43.358968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.448 ms 00:18:38.490 [2024-11-17 08:18:43.358978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.384070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.384115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:38.490 [2024-11-17 08:18:43.384133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.048 ms 00:18:38.490 [2024-11-17 08:18:43.384143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.384193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.384209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:38.490 [2024-11-17 08:18:43.384224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:38.490 [2024-11-17 08:18:43.384234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.384323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.490 [2024-11-17 08:18:43.384342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:38.490 [2024-11-17 08:18:43.384355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:38.490 [2024-11-17 08:18:43.384364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.490 [2024-11-17 08:18:43.385512] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2584.849 ms, result 0 00:18:38.490 { 00:18:38.490 "name": "ftl0", 00:18:38.490 "uuid": "f7d448d0-7c76-4123-9b8f-0733c52f8442" 00:18:38.490 } 00:18:38.490 08:18:43 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:38.490 08:18:43 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:38.749 08:18:43 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:38.749 08:18:43 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:39.009 [2024-11-17 08:18:43.936945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.009 [2024-11-17 08:18:43.937193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:39.009 [2024-11-17 08:18:43.937371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:39.009 [2024-11-17 08:18:43.937551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.009 [2024-11-17 08:18:43.937694] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:39.009 [2024-11-17 08:18:43.940754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.009 [2024-11-17 08:18:43.940916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:39.009 [2024-11-17 08:18:43.941029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.898 ms 00:18:39.009 [2024-11-17 08:18:43.941183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.009 [2024-11-17 08:18:43.941496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.009 [2024-11-17 08:18:43.941540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:39.009 [2024-11-17 08:18:43.941557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:18:39.009 [2024-11-17 08:18:43.941569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.009 [2024-11-17 08:18:43.944405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.009 [2024-11-17 08:18:43.944434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:39.009 [2024-11-17 08:18:43.944465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.812 ms 00:18:39.009 [2024-11-17 08:18:43.944476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.009 [2024-11-17 08:18:43.949871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.009 [2024-11-17 08:18:43.949898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:39.009 [2024-11-17 08:18:43.949917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.370 ms 00:18:39.009 [2024-11-17 08:18:43.949926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.009 [2024-11-17 08:18:43.974346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.009 [2024-11-17 08:18:43.974383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:39.009 [2024-11-17 08:18:43.974402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.350 ms 00:18:39.009 [2024-11-17 08:18:43.974412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.009 [2024-11-17 08:18:43.990023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.009 [2024-11-17 08:18:43.990242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:39.009 [2024-11-17 08:18:43.990276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.563 ms 00:18:39.009 [2024-11-17 08:18:43.990290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.009 [2024-11-17 08:18:43.990461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.009 [2024-11-17 08:18:43.990481] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:39.009 [2024-11-17 08:18:43.990495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:18:39.009 [2024-11-17 08:18:43.990507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.009 [2024-11-17 08:18:44.015162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.009 [2024-11-17 08:18:44.015200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:39.009 [2024-11-17 08:18:44.015234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.596 ms 00:18:39.009 [2024-11-17 08:18:44.015245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.269 [2024-11-17 08:18:44.040672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.269 [2024-11-17 08:18:44.040708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:39.269 [2024-11-17 08:18:44.040726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.379 ms 00:18:39.269 [2024-11-17 08:18:44.040736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.269 [2024-11-17 08:18:44.065216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.269 [2024-11-17 08:18:44.065252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:39.269 [2024-11-17 08:18:44.065284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.433 ms 00:18:39.269 [2024-11-17 08:18:44.065295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.269 [2024-11-17 08:18:44.089284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.269 [2024-11-17 08:18:44.089321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:39.270 [2024-11-17 08:18:44.089338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.886 ms 00:18:39.270 [2024-11-17 08:18:44.089347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.270 [2024-11-17 08:18:44.089391] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:39.270 [2024-11-17 08:18:44.089413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089518] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 
[2024-11-17 08:18:44.089785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.089996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:18:39.270 [2024-11-17 08:18:44.090064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:39.270 [2024-11-17 08:18:44.090326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:39.271 [2024-11-17 08:18:44.090715] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:39.271 [2024-11-17 08:18:44.090728] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f7d448d0-7c76-4123-9b8f-0733c52f8442 00:18:39.271 [2024-11-17 08:18:44.090739] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:39.271 [2024-11-17 08:18:44.090753] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:39.271 [2024-11-17 08:18:44.090766] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:39.271 [2024-11-17 08:18:44.090779] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:39.271 [2024-11-17 08:18:44.090789] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:39.271 [2024-11-17 08:18:44.090802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:39.271 [2024-11-17 08:18:44.090812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:39.271 [2024-11-17 08:18:44.090823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:39.271 [2024-11-17 08:18:44.090832] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:39.271 [2024-11-17 08:18:44.090845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.271 [2024-11-17 08:18:44.090856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:39.271 [2024-11-17 08:18:44.090869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.456 ms 00:18:39.271 [2024-11-17 08:18:44.090882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.271 [2024-11-17 08:18:44.104700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.271 [2024-11-17 08:18:44.104743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:39.271 [2024-11-17 08:18:44.104777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.771 ms 00:18:39.271 [2024-11-17 08:18:44.104788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.271 [2024-11-17 08:18:44.105210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.271 [2024-11-17 08:18:44.105248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:39.271 [2024-11-17 08:18:44.105267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:18:39.271 [2024-11-17 08:18:44.105278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.271 [2024-11-17 08:18:44.147504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.271 [2024-11-17 08:18:44.147707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:39.271 [2024-11-17 08:18:44.147738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.271 [2024-11-17 08:18:44.147750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.271 [2024-11-17 08:18:44.147812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.271 [2024-11-17 08:18:44.147826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:39.271 [2024-11-17 08:18:44.147842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.271 [2024-11-17 08:18:44.147852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.271 [2024-11-17 08:18:44.147948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.271 [2024-11-17 08:18:44.147966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:39.271 [2024-11-17 08:18:44.147980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.271 [2024-11-17 08:18:44.147989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.271 [2024-11-17 08:18:44.148017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.271 [2024-11-17 08:18:44.148030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:39.271 [2024-11-17 08:18:44.148042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.271 [2024-11-17 08:18:44.148054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.271 [2024-11-17 08:18:44.226831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.271 [2024-11-17 08:18:44.226890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:39.271 [2024-11-17 08:18:44.226909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:39.271 [2024-11-17 08:18:44.226919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.531 [2024-11-17 08:18:44.292844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.531 [2024-11-17 08:18:44.292895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:39.531 [2024-11-17 08:18:44.292914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.531 [2024-11-17 08:18:44.292928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.531 [2024-11-17 08:18:44.293043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.531 [2024-11-17 08:18:44.293060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:39.532 [2024-11-17 08:18:44.293072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.532 [2024-11-17 08:18:44.293135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.532 [2024-11-17 08:18:44.293221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.532 [2024-11-17 08:18:44.293237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:39.532 [2024-11-17 08:18:44.293251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.532 [2024-11-17 08:18:44.293262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.532 [2024-11-17 08:18:44.293401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.532 [2024-11-17 08:18:44.293446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:39.532 [2024-11-17 08:18:44.293461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.532 [2024-11-17 08:18:44.293472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.532 [2024-11-17 08:18:44.293564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.532 [2024-11-17 08:18:44.293587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:39.532 [2024-11-17 08:18:44.293602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.532 [2024-11-17 08:18:44.293612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.532 [2024-11-17 08:18:44.293663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.532 [2024-11-17 08:18:44.293677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:39.532 [2024-11-17 08:18:44.293690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.532 [2024-11-17 08:18:44.293701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.532 [2024-11-17 08:18:44.293757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.532 [2024-11-17 08:18:44.293772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:39.532 [2024-11-17 08:18:44.293786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.532 [2024-11-17 08:18:44.293796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.532 [2024-11-17 08:18:44.293958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.984 ms, result 0 00:18:39.532 true 00:18:39.532 08:18:44 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 75538 
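Here restore.sh@66 tears the target down via killprocess, a helper from common/autotest_common.sh whose expansion is traced line by line below (pid existence check, uname, a ps lookup of the process name, then kill and wait). restore.sh@69-73 then prepare the actual restore test: write 1 GiB of random data to a scratch file, record its md5sum as the reference checksum, and push the file into ftl0 with spdk_dd, which starts a fresh SPDK application and re-creates the bdev stack from the previously saved JSON config. A minimal sketch of that data phase, with $pid, $testfile and $SPDK_DIR as stand-ins for the values the real script uses:

  kill "$pid" && wait "$pid"                          # stop the app that owns ftl0
  dd if=/dev/urandom of="$testfile" bs=4K count=256K  # 4 KiB * 262144 = 1 GiB
  md5sum "$testfile"                                  # reference checksum for after the restore
  "$SPDK_DIR"/build/bin/spdk_dd --if="$testfile" --ob=ftl0 \
      --json="$SPDK_DIR"/test/ftl/config/ftl.json     # replay bdev config, write through FTL

The dd below reports 1073741824 bytes copied at 269 MB/s, and the spdk_dd startup that follows walks through the same FTL bring-up quartets as the original create, this time loading the existing superblock and restoring NV cache, band, and L2P state rather than scrubbing a fresh cache.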
00:18:39.532 08:18:44 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 75538 ']' 00:18:39.532 08:18:44 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 75538 00:18:39.532 08:18:44 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:18:39.532 08:18:44 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.532 08:18:44 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75538 00:18:39.532 killing process with pid 75538 00:18:39.532 08:18:44 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.532 08:18:44 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.532 08:18:44 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75538' 00:18:39.532 08:18:44 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 75538 00:18:39.532 08:18:44 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 75538 00:18:43.723 08:18:48 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:47.916 262144+0 records in 00:18:47.916 262144+0 records out 00:18:47.916 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.99729 s, 269 MB/s 00:18:47.916 08:18:52 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:49.296 08:18:54 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:49.296 [2024-11-17 08:18:54.254709] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:49.296 [2024-11-17 08:18:54.254849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75759 ] 00:18:49.556 [2024-11-17 08:18:54.432259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.556 [2024-11-17 08:18:54.551680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.819 [2024-11-17 08:18:54.823566] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:49.819 [2024-11-17 08:18:54.823667] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:50.083 [2024-11-17 08:18:54.985463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.083 [2024-11-17 08:18:54.985508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:50.083 [2024-11-17 08:18:54.985535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:50.083 [2024-11-17 08:18:54.985545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.084 [2024-11-17 08:18:54.985604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.084 [2024-11-17 08:18:54.985620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:50.084 [2024-11-17 08:18:54.985638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:50.084 [2024-11-17 08:18:54.985646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.084 [2024-11-17 08:18:54.985672] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:18:50.084 [2024-11-17 08:18:54.986437] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:50.084 [2024-11-17 08:18:54.986461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.084 [2024-11-17 08:18:54.986471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:50.084 [2024-11-17 08:18:54.986496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:18:50.084 [2024-11-17 08:18:54.986505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.084 [2024-11-17 08:18:54.987709] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:50.084 [2024-11-17 08:18:55.001651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.084 [2024-11-17 08:18:55.001688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:50.084 [2024-11-17 08:18:55.001719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.957 ms 00:18:50.084 [2024-11-17 08:18:55.001729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.084 [2024-11-17 08:18:55.001810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.084 [2024-11-17 08:18:55.001827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:50.084 [2024-11-17 08:18:55.001838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:50.084 [2024-11-17 08:18:55.001847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.084 [2024-11-17 08:18:55.006339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.084 [2024-11-17 08:18:55.006378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:50.084 [2024-11-17 08:18:55.006408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.405 ms 00:18:50.084 [2024-11-17 08:18:55.006417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.084 [2024-11-17 08:18:55.006534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.084 [2024-11-17 08:18:55.006552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:50.084 [2024-11-17 08:18:55.006562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:50.084 [2024-11-17 08:18:55.006571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.084 [2024-11-17 08:18:55.006622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.084 [2024-11-17 08:18:55.006637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:50.084 [2024-11-17 08:18:55.006648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:50.084 [2024-11-17 08:18:55.006657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.084 [2024-11-17 08:18:55.006684] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:50.084 [2024-11-17 08:18:55.010553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.084 [2024-11-17 08:18:55.010601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:50.084 [2024-11-17 08:18:55.010631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.876 ms 00:18:50.084 [2024-11-17 08:18:55.010652] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.084 [2024-11-17 08:18:55.010686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.084 [2024-11-17 08:18:55.010698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:50.084 [2024-11-17 08:18:55.010708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:50.084 [2024-11-17 08:18:55.010717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.084 [2024-11-17 08:18:55.010758] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:50.084 [2024-11-17 08:18:55.010790] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:50.084 [2024-11-17 08:18:55.010827] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:50.084 [2024-11-17 08:18:55.010846] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:50.084 [2024-11-17 08:18:55.010937] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:50.084 [2024-11-17 08:18:55.010950] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:50.084 [2024-11-17 08:18:55.010962] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:50.084 [2024-11-17 08:18:55.010974] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:50.084 [2024-11-17 08:18:55.010985] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:50.084 [2024-11-17 08:18:55.010994] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:50.084 [2024-11-17 08:18:55.011002] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:50.084 [2024-11-17 08:18:55.011011] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:50.084 [2024-11-17 08:18:55.011019] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:50.084 [2024-11-17 08:18:55.011032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.084 [2024-11-17 08:18:55.011041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:50.084 [2024-11-17 08:18:55.011051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:18:50.084 [2024-11-17 08:18:55.011060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.084 [2024-11-17 08:18:55.011171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.084 [2024-11-17 08:18:55.011186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:50.084 [2024-11-17 08:18:55.011197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:50.084 [2024-11-17 08:18:55.011206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.084 [2024-11-17 08:18:55.011311] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:50.084 [2024-11-17 08:18:55.011332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:50.084 [2024-11-17 08:18:55.011342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:50.084 [2024-11-17 08:18:55.011351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.084 [2024-11-17 08:18:55.011361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:50.084 [2024-11-17 08:18:55.011397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:50.084 [2024-11-17 08:18:55.011423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:50.084 [2024-11-17 08:18:55.011433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:50.084 [2024-11-17 08:18:55.011457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:50.084 [2024-11-17 08:18:55.011467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:50.084 [2024-11-17 08:18:55.011476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:50.084 [2024-11-17 08:18:55.011485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:50.084 [2024-11-17 08:18:55.011494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:50.084 [2024-11-17 08:18:55.011505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:50.084 [2024-11-17 08:18:55.011515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:50.084 [2024-11-17 08:18:55.011536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.084 [2024-11-17 08:18:55.011545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:50.084 [2024-11-17 08:18:55.011554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:50.084 [2024-11-17 08:18:55.011563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.084 [2024-11-17 08:18:55.011572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:50.084 [2024-11-17 08:18:55.011581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:50.084 [2024-11-17 08:18:55.011590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:50.084 [2024-11-17 08:18:55.011599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:50.084 [2024-11-17 08:18:55.011608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:50.084 [2024-11-17 08:18:55.011616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:50.084 [2024-11-17 08:18:55.011625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:50.084 [2024-11-17 08:18:55.011634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:50.084 [2024-11-17 08:18:55.011643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:50.084 [2024-11-17 08:18:55.011651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:50.084 [2024-11-17 08:18:55.011660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:50.084 [2024-11-17 08:18:55.011670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:50.084 [2024-11-17 08:18:55.011679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:50.084 [2024-11-17 08:18:55.011702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:50.084 [2024-11-17 08:18:55.011725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:50.084 [2024-11-17 08:18:55.011733] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:50.084 [2024-11-17 08:18:55.011742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:50.084 [2024-11-17 08:18:55.011750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:50.084 [2024-11-17 08:18:55.011758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:50.084 [2024-11-17 08:18:55.011782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:50.084 [2024-11-17 08:18:55.011790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.084 [2024-11-17 08:18:55.011798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:50.085 [2024-11-17 08:18:55.011806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:50.085 [2024-11-17 08:18:55.011815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.085 [2024-11-17 08:18:55.011824] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:50.085 [2024-11-17 08:18:55.011833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:50.085 [2024-11-17 08:18:55.011842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:50.085 [2024-11-17 08:18:55.011850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.085 [2024-11-17 08:18:55.011860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:50.085 [2024-11-17 08:18:55.011868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:50.085 [2024-11-17 08:18:55.011876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:50.085 [2024-11-17 08:18:55.011885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:50.085 [2024-11-17 08:18:55.011893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:50.085 [2024-11-17 08:18:55.011902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:50.085 [2024-11-17 08:18:55.011912] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:50.085 [2024-11-17 08:18:55.011922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:50.085 [2024-11-17 08:18:55.011932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:50.085 [2024-11-17 08:18:55.011941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:50.085 [2024-11-17 08:18:55.011950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:50.085 [2024-11-17 08:18:55.011959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:50.085 [2024-11-17 08:18:55.011968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:50.085 [2024-11-17 08:18:55.011977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:50.085 [2024-11-17 08:18:55.011986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:50.085 [2024-11-17 08:18:55.011996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:50.085 [2024-11-17 08:18:55.012005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:50.085 [2024-11-17 08:18:55.012014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:50.085 [2024-11-17 08:18:55.012023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:50.085 [2024-11-17 08:18:55.012032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:50.085 [2024-11-17 08:18:55.012041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:50.085 [2024-11-17 08:18:55.012051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:50.085 [2024-11-17 08:18:55.012060] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:50.085 [2024-11-17 08:18:55.012073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:50.085 [2024-11-17 08:18:55.012084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:50.085 [2024-11-17 08:18:55.012108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:50.085 [2024-11-17 08:18:55.012117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:50.085 [2024-11-17 08:18:55.012127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:50.085 [2024-11-17 08:18:55.012137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.085 [2024-11-17 08:18:55.012147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:50.085 [2024-11-17 08:18:55.012158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:18:50.085 [2024-11-17 08:18:55.012167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.085 [2024-11-17 08:18:55.040659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.085 [2024-11-17 08:18:55.040710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:50.085 [2024-11-17 08:18:55.040727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.397 ms 00:18:50.085 [2024-11-17 08:18:55.040747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.085 [2024-11-17 08:18:55.040836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.085 [2024-11-17 08:18:55.040849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:50.085 [2024-11-17 08:18:55.040859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.050 ms 00:18:50.085 [2024-11-17 08:18:55.040867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.085 [2024-11-17 08:18:55.090722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.085 [2024-11-17 08:18:55.090945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:50.085 [2024-11-17 08:18:55.090974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.770 ms 00:18:50.085 [2024-11-17 08:18:55.090985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.085 [2024-11-17 08:18:55.091043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.085 [2024-11-17 08:18:55.091059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:50.085 [2024-11-17 08:18:55.091122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:50.085 [2024-11-17 08:18:55.091134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.085 [2024-11-17 08:18:55.091659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.085 [2024-11-17 08:18:55.091694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:50.085 [2024-11-17 08:18:55.091720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:18:50.085 [2024-11-17 08:18:55.091744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.085 [2024-11-17 08:18:55.091934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.085 [2024-11-17 08:18:55.091953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:50.085 [2024-11-17 08:18:55.091975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:18:50.085 [2024-11-17 08:18:55.091985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.106965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.107005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:50.345 [2024-11-17 08:18:55.107020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.941 ms 00:18:50.345 [2024-11-17 08:18:55.107029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.120297] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:50.345 [2024-11-17 08:18:55.120336] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:50.345 [2024-11-17 08:18:55.120352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.120361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:50.345 [2024-11-17 08:18:55.120371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.154 ms 00:18:50.345 [2024-11-17 08:18:55.120380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.143431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.143478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:50.345 [2024-11-17 08:18:55.143493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.011 ms 00:18:50.345 [2024-11-17 08:18:55.143502] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.155852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.155901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:50.345 [2024-11-17 08:18:55.155915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.309 ms 00:18:50.345 [2024-11-17 08:18:55.155924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.168412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.168449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:50.345 [2024-11-17 08:18:55.168462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.450 ms 00:18:50.345 [2024-11-17 08:18:55.168471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.169202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.169229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:50.345 [2024-11-17 08:18:55.169242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:18:50.345 [2024-11-17 08:18:55.169262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.227533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.227599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:50.345 [2024-11-17 08:18:55.227633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.247 ms 00:18:50.345 [2024-11-17 08:18:55.227656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.237722] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:50.345 [2024-11-17 08:18:55.239845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.239877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:50.345 [2024-11-17 08:18:55.239891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.111 ms 00:18:50.345 [2024-11-17 08:18:55.239900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.240022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.240040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:50.345 [2024-11-17 08:18:55.240051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:50.345 [2024-11-17 08:18:55.240059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.240197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.240231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:50.345 [2024-11-17 08:18:55.240242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:50.345 [2024-11-17 08:18:55.240251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.240280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.240293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:50.345 [2024-11-17 08:18:55.240304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:50.345 [2024-11-17 08:18:55.240314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.240360] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:50.345 [2024-11-17 08:18:55.240382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.240392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:50.345 [2024-11-17 08:18:55.240402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:50.345 [2024-11-17 08:18:55.240412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.264690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.264729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:50.345 [2024-11-17 08:18:55.264743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.257 ms 00:18:50.345 [2024-11-17 08:18:55.264752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.264839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.345 [2024-11-17 08:18:55.264854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:50.345 [2024-11-17 08:18:55.264865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:50.345 [2024-11-17 08:18:55.264873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.345 [2024-11-17 08:18:55.266163] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.071 ms, result 0 00:18:51.283  [2024-11-17T08:18:57.675Z] Copying: 23/1024 [MB] (23 MBps) [... intermediate progress updates elided; throughput held steady at 22-24 MBps ...] [2024-11-17T08:19:38.176Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-17 08:19:38.031484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.164 [2024-11-17 08:19:38.031544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:33.164 [2024-11-17 08:19:38.031572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:33.165 [2024-11-17 08:19:38.031591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.165 [2024-11-17 08:19:38.031625] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:33.165 [2024-11-17 08:19:38.034524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.165 [2024-11-17 08:19:38.034691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:33.165 [2024-11-17 08:19:38.034715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.876 ms 00:19:33.165 [2024-11-17 08:19:38.034733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.165 [2024-11-17 08:19:38.036367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.165 [2024-11-17 08:19:38.036466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:33.165 [2024-11-17 08:19:38.036496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:19:33.165 [2024-11-17 08:19:38.036505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.165 [2024-11-17 08:19:38.050552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.165 [2024-11-17 08:19:38.050587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:33.165 [2024-11-17 08:19:38.050617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.028 ms 00:19:33.165 [2024-11-17 08:19:38.050627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.165 [2024-11-17 08:19:38.056076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.165 [2024-11-17 08:19:38.056131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:33.165 [2024-11-17 08:19:38.056160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.407 ms 00:19:33.165 [2024-11-17 08:19:38.056169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.165 [2024-11-17 08:19:38.080483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.165 [2024-11-17 08:19:38.080518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0]
name: Persist NV cache metadata 00:19:33.165 [2024-11-17 08:19:38.080532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.260 ms 00:19:33.165 [2024-11-17 08:19:38.080541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.165 [2024-11-17 08:19:38.095249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.165 [2024-11-17 08:19:38.095425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:33.165 [2024-11-17 08:19:38.095466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.672 ms 00:19:33.165 [2024-11-17 08:19:38.095476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.165 [2024-11-17 08:19:38.095633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.165 [2024-11-17 08:19:38.095662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:33.165 [2024-11-17 08:19:38.095675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:33.165 [2024-11-17 08:19:38.095684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.165 [2024-11-17 08:19:38.120472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.165 [2024-11-17 08:19:38.120507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:33.165 [2024-11-17 08:19:38.120520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.769 ms 00:19:33.165 [2024-11-17 08:19:38.120529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.165 [2024-11-17 08:19:38.144624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.165 [2024-11-17 08:19:38.144658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:33.165 [2024-11-17 08:19:38.144685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.060 ms 00:19:33.165 [2024-11-17 08:19:38.144694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.165 [2024-11-17 08:19:38.168477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.165 [2024-11-17 08:19:38.168511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:33.165 [2024-11-17 08:19:38.168525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.748 ms 00:19:33.165 [2024-11-17 08:19:38.168533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.426 [2024-11-17 08:19:38.193716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.426 [2024-11-17 08:19:38.193750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:33.426 [2024-11-17 08:19:38.193763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.125 ms 00:19:33.426 [2024-11-17 08:19:38.193772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.426 [2024-11-17 08:19:38.193807] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:33.426 [2024-11-17 08:19:38.193826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 
08:19:38.193860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.193998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:19:33.426 [2024-11-17 08:19:38.194227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:33.426 [2024-11-17 08:19:38.194641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:33.427 [2024-11-17 08:19:38.194983] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:33.427 [2024-11-17 08:19:38.195000] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f7d448d0-7c76-4123-9b8f-0733c52f8442 00:19:33.427 [2024-11-17 08:19:38.195015] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 0 00:19:33.427 [2024-11-17 08:19:38.195025] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:33.427 [2024-11-17 08:19:38.195034] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:33.427 [2024-11-17 08:19:38.195043] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:33.427 [2024-11-17 08:19:38.195052] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:33.427 [2024-11-17 08:19:38.195061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:33.427 [2024-11-17 08:19:38.195070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:33.427 [2024-11-17 08:19:38.195116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:33.427 [2024-11-17 08:19:38.195126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:33.427 [2024-11-17 08:19:38.195136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.427 [2024-11-17 08:19:38.195146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:33.427 [2024-11-17 08:19:38.195156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.330 ms 00:19:33.427 [2024-11-17 08:19:38.195166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.208182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.427 [2024-11-17 08:19:38.208214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:33.427 [2024-11-17 08:19:38.208244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.989 ms 00:19:33.427 [2024-11-17 08:19:38.208253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.208620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.427 [2024-11-17 08:19:38.208633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:33.427 [2024-11-17 08:19:38.208642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:19:33.427 [2024-11-17 08:19:38.208657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.241481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.427 [2024-11-17 08:19:38.241520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:33.427 [2024-11-17 08:19:38.241533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.427 [2024-11-17 08:19:38.241542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.241601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.427 [2024-11-17 08:19:38.241618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:33.427 [2024-11-17 08:19:38.241627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.427 [2024-11-17 08:19:38.241642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.241720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.427 [2024-11-17 08:19:38.241736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:33.427 [2024-11-17 08:19:38.241745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.427 [2024-11-17 08:19:38.241754] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.241771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.427 [2024-11-17 08:19:38.241781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:33.427 [2024-11-17 08:19:38.241789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.427 [2024-11-17 08:19:38.241797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.320151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.427 [2024-11-17 08:19:38.320204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:33.427 [2024-11-17 08:19:38.320220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.427 [2024-11-17 08:19:38.320229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.384594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.427 [2024-11-17 08:19:38.384644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:33.427 [2024-11-17 08:19:38.384676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.427 [2024-11-17 08:19:38.384685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.384762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.427 [2024-11-17 08:19:38.384776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:33.427 [2024-11-17 08:19:38.384786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.427 [2024-11-17 08:19:38.384795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.384856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.427 [2024-11-17 08:19:38.384870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:33.427 [2024-11-17 08:19:38.384880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.427 [2024-11-17 08:19:38.384888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.385004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.427 [2024-11-17 08:19:38.385020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:33.427 [2024-11-17 08:19:38.385030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.427 [2024-11-17 08:19:38.385038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.385153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.427 [2024-11-17 08:19:38.385171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:33.427 [2024-11-17 08:19:38.385183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.427 [2024-11-17 08:19:38.385192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.385231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.427 [2024-11-17 08:19:38.385250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:33.427 [2024-11-17 08:19:38.385260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:33.427 [2024-11-17 08:19:38.385269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.427 [2024-11-17 08:19:38.385313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.427 [2024-11-17 08:19:38.385327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:33.427 [2024-11-17 08:19:38.385336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.428 [2024-11-17 08:19:38.385346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.428 [2024-11-17 08:19:38.385537] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 353.985 ms, result 0 00:19:34.374 00:19:34.374 00:19:34.374 08:19:39 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:34.633 [2024-11-17 08:19:39.454656] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:34.633 [2024-11-17 08:19:39.454825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76211 ] 00:19:34.633 [2024-11-17 08:19:39.632504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.892 [2024-11-17 08:19:39.714206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.152 [2024-11-17 08:19:39.982716] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.152 [2024-11-17 08:19:39.982795] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.152 [2024-11-17 08:19:40.139606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.152 [2024-11-17 08:19:40.139808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:35.152 [2024-11-17 08:19:40.139846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.152 [2024-11-17 08:19:40.139858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.152 [2024-11-17 08:19:40.139921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.152 [2024-11-17 08:19:40.139937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.152 [2024-11-17 08:19:40.139950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:35.152 [2024-11-17 08:19:40.139959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.152 [2024-11-17 08:19:40.139987] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:35.152 [2024-11-17 08:19:40.140814] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:35.152 [2024-11-17 08:19:40.140843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.152 [2024-11-17 08:19:40.140853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.152 [2024-11-17 08:19:40.140863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:19:35.152 [2024-11-17 08:19:40.140872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:35.152 [2024-11-17 08:19:40.141945] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:35.152 [2024-11-17 08:19:40.154603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.152 [2024-11-17 08:19:40.154767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:35.152 [2024-11-17 08:19:40.154791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.673 ms 00:19:35.152 [2024-11-17 08:19:40.154803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.152 [2024-11-17 08:19:40.154868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.152 [2024-11-17 08:19:40.154885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:35.152 [2024-11-17 08:19:40.154896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:35.152 [2024-11-17 08:19:40.154905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.152 [2024-11-17 08:19:40.159275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.152 [2024-11-17 08:19:40.159311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.152 [2024-11-17 08:19:40.159340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.294 ms 00:19:35.152 [2024-11-17 08:19:40.159349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.152 [2024-11-17 08:19:40.159472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.152 [2024-11-17 08:19:40.159488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.152 [2024-11-17 08:19:40.159499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:35.152 [2024-11-17 08:19:40.159508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.152 [2024-11-17 08:19:40.159570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.152 [2024-11-17 08:19:40.159586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:35.152 [2024-11-17 08:19:40.159596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:35.152 [2024-11-17 08:19:40.159604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.152 [2024-11-17 08:19:40.159632] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.414 [2024-11-17 08:19:40.163547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.414 [2024-11-17 08:19:40.163584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.414 [2024-11-17 08:19:40.163598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.922 ms 00:19:35.414 [2024-11-17 08:19:40.163613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.414 [2024-11-17 08:19:40.163649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.414 [2024-11-17 08:19:40.163663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:35.414 [2024-11-17 08:19:40.163688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:35.414 [2024-11-17 08:19:40.163712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.414 [2024-11-17 08:19:40.163782] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
FTL layout setup mode 0 00:19:35.414 [2024-11-17 08:19:40.163808] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:35.414 [2024-11-17 08:19:40.163846] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:35.414 [2024-11-17 08:19:40.163879] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:35.414 [2024-11-17 08:19:40.163985] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:35.414 [2024-11-17 08:19:40.164004] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:35.414 [2024-11-17 08:19:40.164017] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:35.414 [2024-11-17 08:19:40.164028] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:35.414 [2024-11-17 08:19:40.164038] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:35.414 [2024-11-17 08:19:40.164047] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:35.414 [2024-11-17 08:19:40.164055] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:35.414 [2024-11-17 08:19:40.164063] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:35.414 [2024-11-17 08:19:40.164071] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:35.414 [2024-11-17 08:19:40.164129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.414 [2024-11-17 08:19:40.164139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:35.414 [2024-11-17 08:19:40.164149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:19:35.414 [2024-11-17 08:19:40.164157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.414 [2024-11-17 08:19:40.164242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.414 [2024-11-17 08:19:40.164255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:35.414 [2024-11-17 08:19:40.164264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:35.414 [2024-11-17 08:19:40.164273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.414 [2024-11-17 08:19:40.164371] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:35.414 [2024-11-17 08:19:40.164392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:35.414 [2024-11-17 08:19:40.164402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.414 [2024-11-17 08:19:40.164411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.414 [2024-11-17 08:19:40.164420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:35.414 [2024-11-17 08:19:40.164428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:35.414 [2024-11-17 08:19:40.164436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:35.414 [2024-11-17 08:19:40.164446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:35.414 [2024-11-17 08:19:40.164469] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:35.414 [2024-11-17 08:19:40.164477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.414 [2024-11-17 08:19:40.164484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:35.414 [2024-11-17 08:19:40.164492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:35.414 [2024-11-17 08:19:40.164500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.414 [2024-11-17 08:19:40.164507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:35.414 [2024-11-17 08:19:40.164516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:35.414 [2024-11-17 08:19:40.164533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.414 [2024-11-17 08:19:40.164541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:35.414 [2024-11-17 08:19:40.164549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:35.414 [2024-11-17 08:19:40.164557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.414 [2024-11-17 08:19:40.164564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:35.414 [2024-11-17 08:19:40.164572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:35.414 [2024-11-17 08:19:40.164580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.414 [2024-11-17 08:19:40.164587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:35.414 [2024-11-17 08:19:40.164595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:35.414 [2024-11-17 08:19:40.164602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.414 [2024-11-17 08:19:40.164609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:35.414 [2024-11-17 08:19:40.164617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:35.414 [2024-11-17 08:19:40.164624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.414 [2024-11-17 08:19:40.164631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:35.414 [2024-11-17 08:19:40.164639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:35.414 [2024-11-17 08:19:40.164646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.414 [2024-11-17 08:19:40.164653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:35.414 [2024-11-17 08:19:40.164661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:35.414 [2024-11-17 08:19:40.164684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.414 [2024-11-17 08:19:40.164692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:35.414 [2024-11-17 08:19:40.164700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:35.414 [2024-11-17 08:19:40.164707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.414 [2024-11-17 08:19:40.164715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:35.414 [2024-11-17 08:19:40.164723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:35.414 [2024-11-17 08:19:40.164747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:19:35.414 [2024-11-17 08:19:40.164755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:35.414 [2024-11-17 08:19:40.164763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:35.414 [2024-11-17 08:19:40.164772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.414 [2024-11-17 08:19:40.164781] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:35.414 [2024-11-17 08:19:40.164789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:35.414 [2024-11-17 08:19:40.164798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.414 [2024-11-17 08:19:40.164807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.414 [2024-11-17 08:19:40.164831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:35.414 [2024-11-17 08:19:40.164839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:35.414 [2024-11-17 08:19:40.164848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:35.414 [2024-11-17 08:19:40.164872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:35.414 [2024-11-17 08:19:40.164881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:35.414 [2024-11-17 08:19:40.164889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:35.414 [2024-11-17 08:19:40.164900] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:35.414 [2024-11-17 08:19:40.164911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.414 [2024-11-17 08:19:40.164922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:35.415 [2024-11-17 08:19:40.164932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:35.415 [2024-11-17 08:19:40.164942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:35.415 [2024-11-17 08:19:40.164952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:35.415 [2024-11-17 08:19:40.164961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:35.415 [2024-11-17 08:19:40.164971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:35.415 [2024-11-17 08:19:40.164980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:35.415 [2024-11-17 08:19:40.164990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:35.415 [2024-11-17 08:19:40.164999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:35.415 [2024-11-17 08:19:40.165009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:35.415 [2024-11-17 
08:19:40.165018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:35.415 [2024-11-17 08:19:40.165042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:35.415 [2024-11-17 08:19:40.165051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:35.415 [2024-11-17 08:19:40.165061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:35.415 [2024-11-17 08:19:40.165071] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:35.415 [2024-11-17 08:19:40.165086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.415 [2024-11-17 08:19:40.165100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:35.415 [2024-11-17 08:19:40.165111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:35.415 [2024-11-17 08:19:40.165120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:35.415 [2024-11-17 08:19:40.165144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:35.415 [2024-11-17 08:19:40.165170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.165179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:35.415 [2024-11-17 08:19:40.165188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:19:35.415 [2024-11-17 08:19:40.165197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.191845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.192081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.415 [2024-11-17 08:19:40.192234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.156 ms 00:19:35.415 [2024-11-17 08:19:40.192280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.192492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.192559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:35.415 [2024-11-17 08:19:40.192958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:35.415 [2024-11-17 08:19:40.193005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.240287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.240490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.415 [2024-11-17 08:19:40.240641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.115 ms 00:19:35.415 [2024-11-17 08:19:40.240689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 
08:19:40.240763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.240871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.415 [2024-11-17 08:19:40.240918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.415 [2024-11-17 08:19:40.240959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.241460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.241493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.415 [2024-11-17 08:19:40.241507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:19:35.415 [2024-11-17 08:19:40.241517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.241668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.241685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.415 [2024-11-17 08:19:40.241695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:19:35.415 [2024-11-17 08:19:40.241711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.255007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.255194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.415 [2024-11-17 08:19:40.255226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.274 ms 00:19:35.415 [2024-11-17 08:19:40.255237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.268690] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:35.415 [2024-11-17 08:19:40.268726] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:35.415 [2024-11-17 08:19:40.268757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.268767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:35.415 [2024-11-17 08:19:40.268777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.350 ms 00:19:35.415 [2024-11-17 08:19:40.268786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.292571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.292612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:35.415 [2024-11-17 08:19:40.292626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.739 ms 00:19:35.415 [2024-11-17 08:19:40.292635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.305135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.305172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:35.415 [2024-11-17 08:19:40.305186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.460 ms 00:19:35.415 [2024-11-17 08:19:40.305194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.317553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.317588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:35.415 [2024-11-17 08:19:40.317617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.321 ms 00:19:35.415 [2024-11-17 08:19:40.317625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.318260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.318315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.415 [2024-11-17 08:19:40.318328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:19:35.415 [2024-11-17 08:19:40.318342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.375854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.375918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:35.415 [2024-11-17 08:19:40.375941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.491 ms 00:19:35.415 [2024-11-17 08:19:40.375951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.385832] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:35.415 [2024-11-17 08:19:40.387842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.387873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.415 [2024-11-17 08:19:40.387886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.816 ms 00:19:35.415 [2024-11-17 08:19:40.387895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.388000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.388016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:35.415 [2024-11-17 08:19:40.388027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:35.415 [2024-11-17 08:19:40.388039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.388161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.388179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.415 [2024-11-17 08:19:40.388190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:35.415 [2024-11-17 08:19:40.388199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.388226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.388238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.415 [2024-11-17 08:19:40.388248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:35.415 [2024-11-17 08:19:40.388257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.388294] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:35.415 [2024-11-17 08:19:40.388311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.388321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Self test on startup 00:19:35.415 [2024-11-17 08:19:40.388330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:35.415 [2024-11-17 08:19:40.388339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.412482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.415 [2024-11-17 08:19:40.412519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.415 [2024-11-17 08:19:40.412533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.124 ms 00:19:35.415 [2024-11-17 08:19:40.412547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.415 [2024-11-17 08:19:40.412618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.416 [2024-11-17 08:19:40.412633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.416 [2024-11-17 08:19:40.412643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:35.416 [2024-11-17 08:19:40.412651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.416 [2024-11-17 08:19:40.414002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 273.858 ms, result 0 00:19:36.794  [intermediate progress frames elided: Copying 22/1024 .. 828/1024 [MB], steady 22-23 MBps] [2024-11-17T08:20:18.690Z] Copying: 851/1024 [MB] (23 
MBps) [2024-11-17T08:20:19.627Z] Copying: 875/1024 [MB] (23 MBps) [2024-11-17T08:20:20.600Z] Copying: 898/1024 [MB] (23 MBps) [2024-11-17T08:20:22.014Z] Copying: 921/1024 [MB] (23 MBps) [2024-11-17T08:20:22.952Z] Copying: 944/1024 [MB] (23 MBps) [2024-11-17T08:20:23.898Z] Copying: 967/1024 [MB] (23 MBps) [2024-11-17T08:20:24.833Z] Copying: 991/1024 [MB] (23 MBps) [2024-11-17T08:20:25.092Z] Copying: 1014/1024 [MB] (22 MBps) [2024-11-17T08:20:26.029Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-17 08:20:25.997686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.017 [2024-11-17 08:20:25.997764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:21.017 [2024-11-17 08:20:25.997783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:21.017 [2024-11-17 08:20:25.997795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.017 [2024-11-17 08:20:25.997823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:21.017 [2024-11-17 08:20:26.001195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.017 [2024-11-17 08:20:26.001236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:21.017 [2024-11-17 08:20:26.001258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms 00:20:21.017 [2024-11-17 08:20:26.001269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.017 [2024-11-17 08:20:26.001483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.017 [2024-11-17 08:20:26.001500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:21.017 [2024-11-17 08:20:26.001511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:20:21.017 [2024-11-17 08:20:26.001521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.017 [2024-11-17 08:20:26.004779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.017 [2024-11-17 08:20:26.004963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:21.017 [2024-11-17 08:20:26.005133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.240 ms 00:20:21.017 [2024-11-17 08:20:26.005326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.017 [2024-11-17 08:20:26.011138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.017 [2024-11-17 08:20:26.011335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:21.017 [2024-11-17 08:20:26.011495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.703 ms 00:20:21.017 [2024-11-17 08:20:26.011657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.277 [2024-11-17 08:20:26.043062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.277 [2024-11-17 08:20:26.043287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:21.277 [2024-11-17 08:20:26.043477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.125 ms 00:20:21.277 [2024-11-17 08:20:26.043567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.277 [2024-11-17 08:20:26.058811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.277 [2024-11-17 08:20:26.058997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map 
metadata 00:20:21.277 [2024-11-17 08:20:26.059186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.059 ms 00:20:21.277 [2024-11-17 08:20:26.059333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.277 [2024-11-17 08:20:26.059663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.277 [2024-11-17 08:20:26.059797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:21.277 [2024-11-17 08:20:26.059927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:20:21.277 [2024-11-17 08:20:26.059964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.277 [2024-11-17 08:20:26.085249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.277 [2024-11-17 08:20:26.085285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:21.277 [2024-11-17 08:20:26.085315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.248 ms 00:20:21.277 [2024-11-17 08:20:26.085324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.277 [2024-11-17 08:20:26.110600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.277 [2024-11-17 08:20:26.110645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:21.277 [2024-11-17 08:20:26.110659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.238 ms 00:20:21.277 [2024-11-17 08:20:26.110667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.278 [2024-11-17 08:20:26.134341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.278 [2024-11-17 08:20:26.134376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:21.278 [2024-11-17 08:20:26.134389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.638 ms 00:20:21.278 [2024-11-17 08:20:26.134398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.278 [2024-11-17 08:20:26.158233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.278 [2024-11-17 08:20:26.158267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:21.278 [2024-11-17 08:20:26.158281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.778 ms 00:20:21.278 [2024-11-17 08:20:26.158289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.278 [2024-11-17 08:20:26.158324] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:21.278 [2024-11-17 08:20:26.158343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:21.278 [2024-11-17 08:20:26.158359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:21.278 [2024-11-17 08:20:26.158368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:21.278 [2024-11-17 08:20:26.158377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:21.278 [2024-11-17 08:20:26.158386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:21.278 [2024-11-17 08:20:26.158394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:21.278 [2024-11-17 08:20:26.158403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free [Bands 8-100 elided: every band reports 0 / 261120 wr_cnt: 0 state: free] 00:20:21.279 [2024-11-17 08:20:26.159279] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:21.279 [2024-11-17 08:20:26.159292] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f7d448d0-7c76-4123-9b8f-0733c52f8442 00:20:21.279 [2024-11-17 08:20:26.159301] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:21.279 [2024-11-17 08:20:26.159310] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:21.279 [2024-11-17 08:20:26.159317] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:21.279 [2024-11-17 08:20:26.159326] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:21.279 [2024-11-17 08:20:26.159334] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:20:21.279 [2024-11-17 08:20:26.159343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:21.279 [2024-11-17 08:20:26.159361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:21.279 [2024-11-17 08:20:26.159369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:21.279 [2024-11-17 08:20:26.159377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:21.279 [2024-11-17 08:20:26.159395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.279 [2024-11-17 08:20:26.159420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:21.279 [2024-11-17 08:20:26.159430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:20:21.279 [2024-11-17 08:20:26.159439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.279 [2024-11-17 08:20:26.172885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.279 [2024-11-17 08:20:26.172919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:21.279 [2024-11-17 08:20:26.172948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.406 ms 00:20:21.279 [2024-11-17 08:20:26.172957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.279 [2024-11-17 08:20:26.173356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.279 [2024-11-17 08:20:26.173372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:21.279 [2024-11-17 08:20:26.173383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:20:21.279 [2024-11-17 08:20:26.173415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.279 [2024-11-17 08:20:26.206229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.279 [2024-11-17 08:20:26.206431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.279 [2024-11-17 08:20:26.206480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.279 [2024-11-17 08:20:26.206501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.279 [2024-11-17 08:20:26.206581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.279 [2024-11-17 08:20:26.206602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.279 [2024-11-17 08:20:26.206620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.279 [2024-11-17 08:20:26.206648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.279 [2024-11-17 08:20:26.206784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.279 [2024-11-17 08:20:26.206802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.279 [2024-11-17 08:20:26.206813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.279 [2024-11-17 08:20:26.206822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.279 [2024-11-17 08:20:26.206841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.279 [2024-11-17 08:20:26.206852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.279 [2024-11-17 08:20:26.206861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.279 [2024-11-17 08:20:26.206870] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.279 [2024-11-17 08:20:26.285333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.279 [2024-11-17 08:20:26.285392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.279 [2024-11-17 08:20:26.285423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.279 [2024-11-17 08:20:26.285433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.538 [2024-11-17 08:20:26.351332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.538 [2024-11-17 08:20:26.351577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:21.538 [2024-11-17 08:20:26.351621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.538 [2024-11-17 08:20:26.351636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.538 [2024-11-17 08:20:26.351768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.538 [2024-11-17 08:20:26.351796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.538 [2024-11-17 08:20:26.351817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.538 [2024-11-17 08:20:26.351836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.538 [2024-11-17 08:20:26.351931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.538 [2024-11-17 08:20:26.351948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.538 [2024-11-17 08:20:26.351958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.538 [2024-11-17 08:20:26.351967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.538 [2024-11-17 08:20:26.352075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.538 [2024-11-17 08:20:26.352092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.538 [2024-11-17 08:20:26.352102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.538 [2024-11-17 08:20:26.352110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.538 [2024-11-17 08:20:26.352240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.538 [2024-11-17 08:20:26.352267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:21.538 [2024-11-17 08:20:26.352277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.538 [2024-11-17 08:20:26.352286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.538 [2024-11-17 08:20:26.352328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.538 [2024-11-17 08:20:26.352346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.538 [2024-11-17 08:20:26.352356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.538 [2024-11-17 08:20:26.352364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.538 [2024-11-17 08:20:26.352439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.538 [2024-11-17 08:20:26.352458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.538 [2024-11-17 08:20:26.352477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:21.539 [2024-11-17 08:20:26.352493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.539 [2024-11-17 08:20:26.352686] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 354.943 ms, result 0 00:20:22.107 00:20:22.107 00:20:22.107 08:20:27 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:24.012 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:24.012 08:20:28 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:24.012 [2024-11-17 08:20:28.963649] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:20:24.012 [2024-11-17 08:20:28.964052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76703 ] 00:20:24.271 [2024-11-17 08:20:29.137995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.271 [2024-11-17 08:20:29.261673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.530 [2024-11-17 08:20:29.520618] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:24.530 [2024-11-17 08:20:29.520702] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:24.790 [2024-11-17 08:20:29.677351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.790 [2024-11-17 08:20:29.677398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:24.790 [2024-11-17 08:20:29.677437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:24.790 [2024-11-17 08:20:29.677447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.790 [2024-11-17 08:20:29.677517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.790 [2024-11-17 08:20:29.677534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:24.790 [2024-11-17 08:20:29.677547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:24.790 [2024-11-17 08:20:29.677556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.790 [2024-11-17 08:20:29.677581] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:24.790 [2024-11-17 08:20:29.678346] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:24.790 [2024-11-17 08:20:29.678374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.790 [2024-11-17 08:20:29.678386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:24.790 [2024-11-17 08:20:29.678397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:20:24.790 [2024-11-17 08:20:29.678406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.790 [2024-11-17 08:20:29.679618] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:24.790 [2024-11-17 08:20:29.693281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.790 [2024-11-17 
08:20:29.693319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:24.790 [2024-11-17 08:20:29.693349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.665 ms 00:20:24.790 [2024-11-17 08:20:29.693359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.790 [2024-11-17 08:20:29.693424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.790 [2024-11-17 08:20:29.693441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:24.790 [2024-11-17 08:20:29.693452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:24.790 [2024-11-17 08:20:29.693476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.790 [2024-11-17 08:20:29.697771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.790 [2024-11-17 08:20:29.697807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:24.790 [2024-11-17 08:20:29.697836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.213 ms 00:20:24.790 [2024-11-17 08:20:29.697845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.790 [2024-11-17 08:20:29.697930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.790 [2024-11-17 08:20:29.697947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:24.790 [2024-11-17 08:20:29.697958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:24.790 [2024-11-17 08:20:29.697967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.790 [2024-11-17 08:20:29.698014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.790 [2024-11-17 08:20:29.698028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:24.790 [2024-11-17 08:20:29.698038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:24.790 [2024-11-17 08:20:29.698047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.790 [2024-11-17 08:20:29.698075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:24.790 [2024-11-17 08:20:29.702033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.790 [2024-11-17 08:20:29.702261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:24.791 [2024-11-17 08:20:29.702297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.965 ms 00:20:24.791 [2024-11-17 08:20:29.702330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.791 [2024-11-17 08:20:29.702385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.791 [2024-11-17 08:20:29.702416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:24.791 [2024-11-17 08:20:29.702438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:24.791 [2024-11-17 08:20:29.702457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.791 [2024-11-17 08:20:29.702516] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:24.791 [2024-11-17 08:20:29.702562] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:24.791 [2024-11-17 08:20:29.702615] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:24.791 [2024-11-17 08:20:29.702635] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:24.791 [2024-11-17 08:20:29.702730] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:24.791 [2024-11-17 08:20:29.702743] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:24.791 [2024-11-17 08:20:29.702756] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:24.791 [2024-11-17 08:20:29.702769] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:24.791 [2024-11-17 08:20:29.702780] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:24.791 [2024-11-17 08:20:29.702789] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:24.791 [2024-11-17 08:20:29.702813] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:24.791 [2024-11-17 08:20:29.702821] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:24.791 [2024-11-17 08:20:29.702829] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:24.791 [2024-11-17 08:20:29.702842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.791 [2024-11-17 08:20:29.702851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:24.791 [2024-11-17 08:20:29.702861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:20:24.791 [2024-11-17 08:20:29.702869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.791 [2024-11-17 08:20:29.702952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.791 [2024-11-17 08:20:29.702965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:24.791 [2024-11-17 08:20:29.702975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:24.791 [2024-11-17 08:20:29.702984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.791 [2024-11-17 08:20:29.703081] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:24.791 [2024-11-17 08:20:29.703101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:24.791 [2024-11-17 08:20:29.703111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:24.791 [2024-11-17 08:20:29.703120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:24.791 [2024-11-17 08:20:29.703175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:24.791 [2024-11-17 08:20:29.703196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:24.791 [2024-11-17 08:20:29.703205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:24.791 [2024-11-17 08:20:29.703222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md_mirror 00:20:24.791 [2024-11-17 08:20:29.703230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:24.791 [2024-11-17 08:20:29.703238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:24.791 [2024-11-17 08:20:29.703247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:24.791 [2024-11-17 08:20:29.703271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:24.791 [2024-11-17 08:20:29.703291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:24.791 [2024-11-17 08:20:29.703310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:24.791 [2024-11-17 08:20:29.703318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:24.791 [2024-11-17 08:20:29.703336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.791 [2024-11-17 08:20:29.703362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:24.791 [2024-11-17 08:20:29.703371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.791 [2024-11-17 08:20:29.703417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:24.791 [2024-11-17 08:20:29.703426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.791 [2024-11-17 08:20:29.703459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:24.791 [2024-11-17 08:20:29.703468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.791 [2024-11-17 08:20:29.703486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:24.791 [2024-11-17 08:20:29.703495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:24.791 [2024-11-17 08:20:29.703513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:24.791 [2024-11-17 08:20:29.703522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:24.791 [2024-11-17 08:20:29.703530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:24.791 [2024-11-17 08:20:29.703539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:24.791 [2024-11-17 08:20:29.703548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:24.791 [2024-11-17 08:20:29.703557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:24.791 [2024-11-17 08:20:29.703574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:24.791 [2024-11-17 08:20:29.703583] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703592] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:24.791 [2024-11-17 08:20:29.703618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:24.791 [2024-11-17 08:20:29.703628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:24.791 [2024-11-17 08:20:29.703637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.791 [2024-11-17 08:20:29.703648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:24.792 [2024-11-17 08:20:29.703659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:24.792 [2024-11-17 08:20:29.703668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:24.792 [2024-11-17 08:20:29.703693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:24.792 [2024-11-17 08:20:29.703705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:24.792 [2024-11-17 08:20:29.703722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:24.792 [2024-11-17 08:20:29.703739] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:24.792 [2024-11-17 08:20:29.703752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:24.792 [2024-11-17 08:20:29.703766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:24.792 [2024-11-17 08:20:29.703784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:24.792 [2024-11-17 08:20:29.703803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:24.792 [2024-11-17 08:20:29.703822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:24.792 [2024-11-17 08:20:29.703857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:24.792 [2024-11-17 08:20:29.703875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:24.792 [2024-11-17 08:20:29.703893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:24.792 [2024-11-17 08:20:29.703910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:24.792 [2024-11-17 08:20:29.703929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:24.792 [2024-11-17 08:20:29.703946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:24.792 [2024-11-17 08:20:29.703957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:24.792 [2024-11-17 08:20:29.703967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 
ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:24.792 [2024-11-17 08:20:29.703976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:24.792 [2024-11-17 08:20:29.703986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:24.792 [2024-11-17 08:20:29.703996] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:24.792 [2024-11-17 08:20:29.704013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:24.792 [2024-11-17 08:20:29.704028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:24.792 [2024-11-17 08:20:29.704045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:24.792 [2024-11-17 08:20:29.704056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:24.792 [2024-11-17 08:20:29.704072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:24.792 [2024-11-17 08:20:29.704093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.792 [2024-11-17 08:20:29.704113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:24.792 [2024-11-17 08:20:29.704132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:20:24.792 [2024-11-17 08:20:29.704150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.792 [2024-11-17 08:20:29.731696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.792 [2024-11-17 08:20:29.731742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:24.792 [2024-11-17 08:20:29.731759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.412 ms 00:20:24.792 [2024-11-17 08:20:29.731768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.792 [2024-11-17 08:20:29.731866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.792 [2024-11-17 08:20:29.731880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:24.792 [2024-11-17 08:20:29.731890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:24.792 [2024-11-17 08:20:29.731898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.792 [2024-11-17 08:20:29.773271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.792 [2024-11-17 08:20:29.773318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:24.792 [2024-11-17 08:20:29.773351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.298 ms 00:20:24.792 [2024-11-17 08:20:29.773361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.792 [2024-11-17 08:20:29.773418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.792 [2024-11-17 08:20:29.773433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:24.792 [2024-11-17 08:20:29.773444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.003 ms 00:20:24.792 [2024-11-17 08:20:29.773458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.792 [2024-11-17 08:20:29.773832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.792 [2024-11-17 08:20:29.773847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:24.792 [2024-11-17 08:20:29.773858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:20:24.792 [2024-11-17 08:20:29.773866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.792 [2024-11-17 08:20:29.773991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.792 [2024-11-17 08:20:29.774013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:24.792 [2024-11-17 08:20:29.774025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:24.792 [2024-11-17 08:20:29.774039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.792 [2024-11-17 08:20:29.787398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.792 [2024-11-17 08:20:29.787434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:24.792 [2024-11-17 08:20:29.787468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.337 ms 00:20:24.792 [2024-11-17 08:20:29.787477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.801179] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:25.052 [2024-11-17 08:20:29.801244] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:25.052 [2024-11-17 08:20:29.801277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.801287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:25.052 [2024-11-17 08:20:29.801298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.687 ms 00:20:25.052 [2024-11-17 08:20:29.801308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.825191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.825232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:25.052 [2024-11-17 08:20:29.825246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.822 ms 00:20:25.052 [2024-11-17 08:20:29.825256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.837463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.837498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:25.052 [2024-11-17 08:20:29.837511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.168 ms 00:20:25.052 [2024-11-17 08:20:29.837519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.849928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.849964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:25.052 [2024-11-17 08:20:29.849993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.373 ms 00:20:25.052 [2024-11-17 
08:20:29.850002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.850865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.850899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:25.052 [2024-11-17 08:20:29.850943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:20:25.052 [2024-11-17 08:20:29.850956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.910060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.910139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:25.052 [2024-11-17 08:20:29.910163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.080 ms 00:20:25.052 [2024-11-17 08:20:29.910172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.920236] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:25.052 [2024-11-17 08:20:29.922187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.922218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:25.052 [2024-11-17 08:20:29.922232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.959 ms 00:20:25.052 [2024-11-17 08:20:29.922241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.922351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.922368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:25.052 [2024-11-17 08:20:29.922379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:25.052 [2024-11-17 08:20:29.922390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.922466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.922481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:25.052 [2024-11-17 08:20:29.922490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:25.052 [2024-11-17 08:20:29.922499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.922521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.922532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:25.052 [2024-11-17 08:20:29.922542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:25.052 [2024-11-17 08:20:29.922550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.922586] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:25.052 [2024-11-17 08:20:29.922603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.922612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:25.052 [2024-11-17 08:20:29.922620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:25.052 [2024-11-17 08:20:29.922629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.947282] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.947472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:25.052 [2024-11-17 08:20:29.947509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.634 ms 00:20:25.052 [2024-11-17 08:20:29.947560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.947681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.052 [2024-11-17 08:20:29.947730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:25.052 [2024-11-17 08:20:29.947778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:25.052 [2024-11-17 08:20:29.947789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.052 [2024-11-17 08:20:29.949156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 271.143 ms, result 0 00:20:25.989  [2024-11-17T08:20:32.378Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-17T08:20:33.313Z] Copying: 47/1024 [MB] (23 MBps) [2024-11-17T08:20:34.249Z] Copying: 71/1024 [MB] (23 MBps) [2024-11-17T08:20:35.185Z] Copying: 94/1024 [MB] (23 MBps) [2024-11-17T08:20:36.121Z] Copying: 118/1024 [MB] (23 MBps) [2024-11-17T08:20:37.057Z] Copying: 142/1024 [MB] (24 MBps) [2024-11-17T08:20:37.993Z] Copying: 166/1024 [MB] (23 MBps) [2024-11-17T08:20:39.371Z] Copying: 191/1024 [MB] (24 MBps) [2024-11-17T08:20:40.308Z] Copying: 214/1024 [MB] (23 MBps) [2024-11-17T08:20:41.246Z] Copying: 237/1024 [MB] (23 MBps) [2024-11-17T08:20:42.184Z] Copying: 261/1024 [MB] (23 MBps) [2024-11-17T08:20:43.122Z] Copying: 284/1024 [MB] (23 MBps) [2024-11-17T08:20:44.060Z] Copying: 308/1024 [MB] (23 MBps) [2024-11-17T08:20:45.046Z] Copying: 331/1024 [MB] (23 MBps) [2024-11-17T08:20:46.009Z] Copying: 355/1024 [MB] (23 MBps) [2024-11-17T08:20:47.389Z] Copying: 379/1024 [MB] (23 MBps) [2024-11-17T08:20:48.325Z] Copying: 403/1024 [MB] (24 MBps) [2024-11-17T08:20:49.262Z] Copying: 427/1024 [MB] (24 MBps) [2024-11-17T08:20:50.197Z] Copying: 451/1024 [MB] (24 MBps) [2024-11-17T08:20:51.133Z] Copying: 475/1024 [MB] (23 MBps) [2024-11-17T08:20:52.069Z] Copying: 499/1024 [MB] (24 MBps) [2024-11-17T08:20:53.005Z] Copying: 523/1024 [MB] (24 MBps) [2024-11-17T08:20:54.382Z] Copying: 548/1024 [MB] (24 MBps) [2024-11-17T08:20:55.320Z] Copying: 571/1024 [MB] (23 MBps) [2024-11-17T08:20:56.257Z] Copying: 595/1024 [MB] (24 MBps) [2024-11-17T08:20:57.207Z] Copying: 619/1024 [MB] (24 MBps) [2024-11-17T08:20:58.144Z] Copying: 643/1024 [MB] (23 MBps) [2024-11-17T08:20:59.081Z] Copying: 667/1024 [MB] (23 MBps) [2024-11-17T08:21:00.019Z] Copying: 691/1024 [MB] (23 MBps) [2024-11-17T08:21:01.399Z] Copying: 715/1024 [MB] (24 MBps) [2024-11-17T08:21:01.968Z] Copying: 739/1024 [MB] (23 MBps) [2024-11-17T08:21:03.346Z] Copying: 762/1024 [MB] (23 MBps) [2024-11-17T08:21:04.283Z] Copying: 786/1024 [MB] (23 MBps) [2024-11-17T08:21:05.220Z] Copying: 809/1024 [MB] (23 MBps) [2024-11-17T08:21:06.155Z] Copying: 832/1024 [MB] (23 MBps) [2024-11-17T08:21:07.091Z] Copying: 856/1024 [MB] (24 MBps) [2024-11-17T08:21:08.027Z] Copying: 880/1024 [MB] (24 MBps) [2024-11-17T08:21:08.964Z] Copying: 904/1024 [MB] (24 MBps) [2024-11-17T08:21:10.340Z] Copying: 928/1024 [MB] (23 MBps) [2024-11-17T08:21:11.276Z] Copying: 952/1024 [MB] (23 MBps) [2024-11-17T08:21:12.211Z] Copying: 976/1024 [MB] (23 MBps) [2024-11-17T08:21:13.145Z] Copying: 
1000/1024 [MB] (24 MBps) [2024-11-17T08:21:14.081Z] Copying: 1023/1024 [MB] (22 MBps) [2024-11-17T08:21:14.081Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-17 08:21:13.922567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.069 [2024-11-17 08:21:13.922697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:09.069 [2024-11-17 08:21:13.922749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:09.069 [2024-11-17 08:21:13.922776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.069 [2024-11-17 08:21:13.927133] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:09.069 [2024-11-17 08:21:13.931259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.069 [2024-11-17 08:21:13.931293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:09.069 [2024-11-17 08:21:13.931306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.085 ms 00:21:09.069 [2024-11-17 08:21:13.931315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.069 [2024-11-17 08:21:13.942328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.069 [2024-11-17 08:21:13.942366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:09.069 [2024-11-17 08:21:13.942381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.885 ms 00:21:09.069 [2024-11-17 08:21:13.942391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.069 [2024-11-17 08:21:13.962778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.069 [2024-11-17 08:21:13.962828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:09.069 [2024-11-17 08:21:13.962860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.363 ms 00:21:09.069 [2024-11-17 08:21:13.962869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.069 [2024-11-17 08:21:13.968256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.069 [2024-11-17 08:21:13.968284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:09.069 [2024-11-17 08:21:13.968296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.352 ms 00:21:09.069 [2024-11-17 08:21:13.968304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.069 [2024-11-17 08:21:13.992495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.069 [2024-11-17 08:21:13.992531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:09.069 [2024-11-17 08:21:13.992544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.145 ms 00:21:09.069 [2024-11-17 08:21:13.992553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.069 [2024-11-17 08:21:14.006838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.069 [2024-11-17 08:21:14.006879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:09.069 [2024-11-17 08:21:14.006893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.249 ms 00:21:09.069 [2024-11-17 08:21:14.006902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.329 [2024-11-17 08:21:14.111562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:09.329 [2024-11-17 08:21:14.111606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:09.329 [2024-11-17 08:21:14.111636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.620 ms 00:21:09.329 [2024-11-17 08:21:14.111647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.329 [2024-11-17 08:21:14.136457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.329 [2024-11-17 08:21:14.136488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:09.330 [2024-11-17 08:21:14.136501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.791 ms 00:21:09.330 [2024-11-17 08:21:14.136509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.330 [2024-11-17 08:21:14.160758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.330 [2024-11-17 08:21:14.160798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:09.330 [2024-11-17 08:21:14.160810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.213 ms 00:21:09.330 [2024-11-17 08:21:14.160819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.330 [2024-11-17 08:21:14.184704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.330 [2024-11-17 08:21:14.184733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:09.330 [2024-11-17 08:21:14.184745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.850 ms 00:21:09.330 [2024-11-17 08:21:14.184753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.330 [2024-11-17 08:21:14.208431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.330 [2024-11-17 08:21:14.208461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:09.330 [2024-11-17 08:21:14.208474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.621 ms 00:21:09.330 [2024-11-17 08:21:14.208482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.330 [2024-11-17 08:21:14.208517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:09.330 [2024-11-17 08:21:14.208536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 114176 / 261120 wr_cnt: 1 state: open 00:21:09.330 [2024-11-17 08:21:14.208547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 
state: free 00:21:09.330 [2024-11-17 08:21:14.208615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 
0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.208998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:09.330 [2024-11-17 08:21:14.209289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209372] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:09.331 [2024-11-17 08:21:14.209566] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:09.331 [2024-11-17 08:21:14.209575] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f7d448d0-7c76-4123-9b8f-0733c52f8442 00:21:09.331 [2024-11-17 08:21:14.209592] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 114176 00:21:09.331 [2024-11-17 08:21:14.209600] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 115136 00:21:09.331 [2024-11-17 08:21:14.209610] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 114176 00:21:09.331 [2024-11-17 08:21:14.209619] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0084 00:21:09.331 [2024-11-17 08:21:14.209628] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:09.331 [2024-11-17 08:21:14.209640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:09.331 [2024-11-17 08:21:14.209664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:09.331 [2024-11-17 08:21:14.209676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 
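The write-amplification figure in the statistics dump above follows directly from the two counters printed beside it (total writes vs. user writes), and the valid-LBA total matches the single open band (Band 1: 114176 / 261120; every other band is free). A quick check:

    # Counters from the ftl_dev_dump_stats output above.
    total_writes = 115136
    user_writes = 114176
    print(round(total_writes / user_writes, 4))  # -> 1.0084, the reported WAF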
00:21:09.331 [2024-11-17 08:21:14.209685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:09.331 [2024-11-17 08:21:14.209694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.331 [2024-11-17 08:21:14.209703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:09.331 [2024-11-17 08:21:14.209712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.178 ms 00:21:09.331 [2024-11-17 08:21:14.209722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.331 [2024-11-17 08:21:14.222813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.331 [2024-11-17 08:21:14.222840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:09.331 [2024-11-17 08:21:14.222852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.071 ms 00:21:09.331 [2024-11-17 08:21:14.222867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.331 [2024-11-17 08:21:14.223275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.331 [2024-11-17 08:21:14.223292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:09.331 [2024-11-17 08:21:14.223303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:21:09.331 [2024-11-17 08:21:14.223312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.331 [2024-11-17 08:21:14.256163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.331 [2024-11-17 08:21:14.256209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.331 [2024-11-17 08:21:14.256226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.331 [2024-11-17 08:21:14.256236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.331 [2024-11-17 08:21:14.256287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.331 [2024-11-17 08:21:14.256300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:09.331 [2024-11-17 08:21:14.256310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.331 [2024-11-17 08:21:14.256319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.331 [2024-11-17 08:21:14.256432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.331 [2024-11-17 08:21:14.256450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.331 [2024-11-17 08:21:14.256460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.331 [2024-11-17 08:21:14.256475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.331 [2024-11-17 08:21:14.256494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.331 [2024-11-17 08:21:14.256506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.331 [2024-11-17 08:21:14.256516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.331 [2024-11-17 08:21:14.256525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.331 [2024-11-17 08:21:14.334608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.331 [2024-11-17 08:21:14.334692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:09.331 [2024-11-17 08:21:14.334713] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.331 [2024-11-17 08:21:14.334722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.589 [2024-11-17 08:21:14.401052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.589 [2024-11-17 08:21:14.401129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:09.589 [2024-11-17 08:21:14.401145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.589 [2024-11-17 08:21:14.401155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.589 [2024-11-17 08:21:14.401228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.589 [2024-11-17 08:21:14.401244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:09.589 [2024-11-17 08:21:14.401254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.589 [2024-11-17 08:21:14.401263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.589 [2024-11-17 08:21:14.401327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.589 [2024-11-17 08:21:14.401341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:09.589 [2024-11-17 08:21:14.401366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.590 [2024-11-17 08:21:14.401375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.590 [2024-11-17 08:21:14.401514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.590 [2024-11-17 08:21:14.401541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:09.590 [2024-11-17 08:21:14.401552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.590 [2024-11-17 08:21:14.401561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.590 [2024-11-17 08:21:14.401609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.590 [2024-11-17 08:21:14.401625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:09.590 [2024-11-17 08:21:14.401635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.590 [2024-11-17 08:21:14.401644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.590 [2024-11-17 08:21:14.401683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.590 [2024-11-17 08:21:14.401696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:09.590 [2024-11-17 08:21:14.401706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.590 [2024-11-17 08:21:14.401715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.590 [2024-11-17 08:21:14.401765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.590 [2024-11-17 08:21:14.401780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:09.590 [2024-11-17 08:21:14.401789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.590 [2024-11-17 08:21:14.401798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.590 [2024-11-17 08:21:14.401922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 480.869 ms, result 0 00:21:10.965 00:21:10.965 
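With 'FTL shutdown' finished, restore.sh moves on to reading the written data back through spdk_dd (the command on the next log line): --ib=ftl0 names the FTL bdev as input, and --skip/--count are given in blocks. Assuming the 4 KiB FTL block size implied by the 1024 MB total in the progress lines that follow:

    BLOCK = 4096  # assumed FTL block size; consistent with the 1024 MB progress total
    skip, count = 131072, 262144  # from the spdk_dd command line below
    print(skip * BLOCK // 2**20)   # -> 512: MiB of input skipped before reading
    print(count * BLOCK // 2**20)  # -> 1024: MiB read, matching "1024/1024 [MB]"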
00:21:10.965 08:21:15 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:11.224 [2024-11-17 08:21:16.044387] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:21:11.224 [2024-11-17 08:21:16.044555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77172 ] 00:21:11.224 [2024-11-17 08:21:16.220718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.483 [2024-11-17 08:21:16.308938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.742 [2024-11-17 08:21:16.567740] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:11.742 [2024-11-17 08:21:16.568014] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:11.743 [2024-11-17 08:21:16.723328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.743 [2024-11-17 08:21:16.723565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:11.743 [2024-11-17 08:21:16.723602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:11.743 [2024-11-17 08:21:16.723615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.743 [2024-11-17 08:21:16.723681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.743 [2024-11-17 08:21:16.723697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.743 [2024-11-17 08:21:16.723726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:11.743 [2024-11-17 08:21:16.723736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.743 [2024-11-17 08:21:16.723765] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:11.743 [2024-11-17 08:21:16.724631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:11.743 [2024-11-17 08:21:16.724669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.743 [2024-11-17 08:21:16.724683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.743 [2024-11-17 08:21:16.724694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms 00:21:11.743 [2024-11-17 08:21:16.724704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.743 [2024-11-17 08:21:16.725877] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:11.743 [2024-11-17 08:21:16.738632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.743 [2024-11-17 08:21:16.738801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:11.743 [2024-11-17 08:21:16.738827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.757 ms 00:21:11.743 [2024-11-17 08:21:16.738838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.743 [2024-11-17 08:21:16.738938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.743 [2024-11-17 08:21:16.738956] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:11.743 [2024-11-17 08:21:16.738967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:11.743 [2024-11-17 08:21:16.738977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.743 [2024-11-17 08:21:16.743116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.743 [2024-11-17 08:21:16.743152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.743 [2024-11-17 08:21:16.743165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.061 ms 00:21:11.743 [2024-11-17 08:21:16.743174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.743 [2024-11-17 08:21:16.743253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.743 [2024-11-17 08:21:16.743268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.743 [2024-11-17 08:21:16.743284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:11.743 [2024-11-17 08:21:16.743293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.743 [2024-11-17 08:21:16.743346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.743 [2024-11-17 08:21:16.743360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:11.743 [2024-11-17 08:21:16.743370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:11.743 [2024-11-17 08:21:16.743378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.743 [2024-11-17 08:21:16.743430] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:11.743 [2024-11-17 08:21:16.746939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.743 [2024-11-17 08:21:16.746969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.743 [2024-11-17 08:21:16.746981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.516 ms 00:21:11.743 [2024-11-17 08:21:16.746994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.743 [2024-11-17 08:21:16.747024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.743 [2024-11-17 08:21:16.747036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:11.743 [2024-11-17 08:21:16.747046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:11.743 [2024-11-17 08:21:16.747055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.743 [2024-11-17 08:21:16.747093] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:11.743 [2024-11-17 08:21:16.747119] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:11.743 [2024-11-17 08:21:16.747152] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:11.743 [2024-11-17 08:21:16.747170] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:11.743 [2024-11-17 08:21:16.747277] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:11.743 [2024-11-17 08:21:16.747290] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] base layout blob store 0x48 bytes 00:21:11.743 [2024-11-17 08:21:16.747302] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:11.743 [2024-11-17 08:21:16.747314] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:11.743 [2024-11-17 08:21:16.747324] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:11.743 [2024-11-17 08:21:16.747334] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:11.743 [2024-11-17 08:21:16.747343] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:11.743 [2024-11-17 08:21:16.747351] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:11.743 [2024-11-17 08:21:16.747360] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:11.743 [2024-11-17 08:21:16.747373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.743 [2024-11-17 08:21:16.747383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:11.743 [2024-11-17 08:21:16.747419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:21:11.743 [2024-11-17 08:21:16.747429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.743 [2024-11-17 08:21:16.747558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.743 [2024-11-17 08:21:16.747571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:11.743 [2024-11-17 08:21:16.747582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:21:11.743 [2024-11-17 08:21:16.747592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.743 [2024-11-17 08:21:16.747735] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:11.743 [2024-11-17 08:21:16.747759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:11.743 [2024-11-17 08:21:16.747771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.743 [2024-11-17 08:21:16.747782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.743 [2024-11-17 08:21:16.747792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:11.743 [2024-11-17 08:21:16.747801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:11.743 [2024-11-17 08:21:16.747811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:11.743 [2024-11-17 08:21:16.747820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:11.743 [2024-11-17 08:21:16.747829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:11.743 [2024-11-17 08:21:16.747838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.743 [2024-11-17 08:21:16.747847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:11.743 [2024-11-17 08:21:16.747856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:11.743 [2024-11-17 08:21:16.747867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.743 [2024-11-17 08:21:16.747876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:11.743 [2024-11-17 08:21:16.747885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:11.743 
[2024-11-17 08:21:16.747905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.743 [2024-11-17 08:21:16.747915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:11.743 [2024-11-17 08:21:16.747924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:11.743 [2024-11-17 08:21:16.747933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.743 [2024-11-17 08:21:16.747942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:11.743 [2024-11-17 08:21:16.747951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:11.743 [2024-11-17 08:21:16.747960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.744 [2024-11-17 08:21:16.747985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:11.744 [2024-11-17 08:21:16.747994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:11.744 [2024-11-17 08:21:16.748003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.744 [2024-11-17 08:21:16.748012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:11.744 [2024-11-17 08:21:16.748037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:11.744 [2024-11-17 08:21:16.748046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.744 [2024-11-17 08:21:16.748055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:11.744 [2024-11-17 08:21:16.748065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:11.744 [2024-11-17 08:21:16.748074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.744 [2024-11-17 08:21:16.748084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:11.744 [2024-11-17 08:21:16.748094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:11.744 [2024-11-17 08:21:16.748103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.744 [2024-11-17 08:21:16.748112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:11.744 [2024-11-17 08:21:16.748122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:11.744 [2024-11-17 08:21:16.748136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.744 [2024-11-17 08:21:16.748150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:11.744 [2024-11-17 08:21:16.748180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:11.744 [2024-11-17 08:21:16.748194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.744 [2024-11-17 08:21:16.748204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:11.744 [2024-11-17 08:21:16.748214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:11.744 [2024-11-17 08:21:16.748223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.744 [2024-11-17 08:21:16.748232] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:11.744 [2024-11-17 08:21:16.748245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:11.744 [2024-11-17 08:21:16.748262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.744 [2024-11-17 08:21:16.748278] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.744 [2024-11-17 08:21:16.748293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:11.744 [2024-11-17 08:21:16.748309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:11.744 [2024-11-17 08:21:16.748325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:11.744 [2024-11-17 08:21:16.748341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:11.744 [2024-11-17 08:21:16.748356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:11.744 [2024-11-17 08:21:16.748372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:11.744 [2024-11-17 08:21:16.748390] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:11.744 [2024-11-17 08:21:16.748412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.744 [2024-11-17 08:21:16.748431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:11.744 [2024-11-17 08:21:16.748449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:11.744 [2024-11-17 08:21:16.748479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:11.744 [2024-11-17 08:21:16.748496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:11.744 [2024-11-17 08:21:16.748511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:11.744 [2024-11-17 08:21:16.748542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:11.744 [2024-11-17 08:21:16.748558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:11.744 [2024-11-17 08:21:16.748575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:11.744 [2024-11-17 08:21:16.748606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:11.744 [2024-11-17 08:21:16.748619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:11.744 [2024-11-17 08:21:16.748630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:11.744 [2024-11-17 08:21:16.748641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:11.744 [2024-11-17 08:21:16.748651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:11.744 [2024-11-17 08:21:16.748662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:11.744 [2024-11-17 08:21:16.748672] upgrade/ftl_sb_v5.c: 
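The SB metadata layout just dumped gives each region's offset and size in hexadecimal FTL blocks, whereas the earlier per-region dump used MiB; the two agree under the same 4 KiB block. For example, the type:0x2 region above (blk_offs:0x20 blk_sz:0x5000) lines up with the 80 MiB l2p region at offset 0.12 MiB in the NV cache layout dump:

    BLOCK = 4096  # 4 KiB FTL block, as assumed above
    print(0x20 * BLOCK / 2**20)     # -> 0.125 MiB offset ("offset: 0.12 MiB")
    print(0x5000 * BLOCK // 2**20)  # -> 80 MiB size ("blocks: 80.00 MiB")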
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:11.744 [2024-11-17 08:21:16.748690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.744 [2024-11-17 08:21:16.748701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:11.744 [2024-11-17 08:21:16.748712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:11.744 [2024-11-17 08:21:16.748722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:11.744 [2024-11-17 08:21:16.748732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:11.744 [2024-11-17 08:21:16.748744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.744 [2024-11-17 08:21:16.748756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:11.744 [2024-11-17 08:21:16.748767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:21:11.744 [2024-11-17 08:21:16.748777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.776202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.776252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.004 [2024-11-17 08:21:16.776285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.363 ms 00:21:12.004 [2024-11-17 08:21:16.776295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.776397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.776410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:12.004 [2024-11-17 08:21:16.776421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:12.004 [2024-11-17 08:21:16.776430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.817146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.817191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.004 [2024-11-17 08:21:16.817222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.626 ms 00:21:12.004 [2024-11-17 08:21:16.817232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.817286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.817301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:12.004 [2024-11-17 08:21:16.817317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:12.004 [2024-11-17 08:21:16.817331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.817740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.817764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:12.004 [2024-11-17 08:21:16.817777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.290 ms 00:21:12.004 [2024-11-17 08:21:16.817787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.817941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.817958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:12.004 [2024-11-17 08:21:16.817969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:21:12.004 [2024-11-17 08:21:16.817985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.831381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.831423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:12.004 [2024-11-17 08:21:16.831457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.372 ms 00:21:12.004 [2024-11-17 08:21:16.831467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.844318] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:12.004 [2024-11-17 08:21:16.844355] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:12.004 [2024-11-17 08:21:16.844370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.844380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:12.004 [2024-11-17 08:21:16.844390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.793 ms 00:21:12.004 [2024-11-17 08:21:16.844399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.868007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.868105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:12.004 [2024-11-17 08:21:16.868122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.569 ms 00:21:12.004 [2024-11-17 08:21:16.868133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.880956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.881000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:12.004 [2024-11-17 08:21:16.881028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.780 ms 00:21:12.004 [2024-11-17 08:21:16.881038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.893868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.893903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:12.004 [2024-11-17 08:21:16.893931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.792 ms 00:21:12.004 [2024-11-17 08:21:16.893940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.894725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.894773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:12.004 [2024-11-17 08:21:16.894816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:21:12.004 [2024-11-17 
08:21:16.894845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.952221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.952286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:12.004 [2024-11-17 08:21:16.952308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.352 ms 00:21:12.004 [2024-11-17 08:21:16.952318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.963398] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:12.004 [2024-11-17 08:21:16.965678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.965713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:12.004 [2024-11-17 08:21:16.965728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.301 ms 00:21:12.004 [2024-11-17 08:21:16.965738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.965885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.965905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:12.004 [2024-11-17 08:21:16.965917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:12.004 [2024-11-17 08:21:16.965930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.967275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.967463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:12.004 [2024-11-17 08:21:16.967488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.293 ms 00:21:12.004 [2024-11-17 08:21:16.967500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.967537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.004 [2024-11-17 08:21:16.967550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:12.004 [2024-11-17 08:21:16.967562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:12.004 [2024-11-17 08:21:16.967572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.004 [2024-11-17 08:21:16.967613] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:12.004 [2024-11-17 08:21:16.967632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.005 [2024-11-17 08:21:16.967642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:12.005 [2024-11-17 08:21:16.967653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:12.005 [2024-11-17 08:21:16.967663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.005 [2024-11-17 08:21:16.991904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.005 [2024-11-17 08:21:16.991941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:12.005 [2024-11-17 08:21:16.991955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.219 ms 00:21:12.005 [2024-11-17 08:21:16.991970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.005 [2024-11-17 08:21:16.992056] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.005 [2024-11-17 08:21:16.992076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:12.005 [2024-11-17 08:21:16.992121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:12.005 [2024-11-17 08:21:16.992131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.005 [2024-11-17 08:21:16.993520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 269.481 ms, result 0 00:21:13.383  [2024-11-17T08:21:19.449Z] Copying: 18/1024 [MB] (18 MBps) [... 44 intermediate spdk_dd progress ticks elided: 41/1024 MB through 1004/1024 MB, steady 21-23 MBps, 2024-11-17T08:21:20Z through 08:22:03Z ...] [2024-11-17T08:22:03.383Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-17 08:22:03.345602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.371 [2024-11-17 08:22:03.345691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*:
[FTL][ftl0] name: Deinit core IO channel 00:21:58.371 [2024-11-17 08:22:03.345724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:58.371 [2024-11-17 08:22:03.345743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.371 [2024-11-17 08:22:03.345808] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:58.371 [2024-11-17 08:22:03.351380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.371 [2024-11-17 08:22:03.351660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:58.371 [2024-11-17 08:22:03.351858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.535 ms 00:21:58.371 [2024-11-17 08:22:03.351939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.371 [2024-11-17 08:22:03.352505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.371 [2024-11-17 08:22:03.352721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:58.371 [2024-11-17 08:22:03.352908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:21:58.371 [2024-11-17 08:22:03.352986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.371 [2024-11-17 08:22:03.358384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.371 [2024-11-17 08:22:03.358653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:58.371 [2024-11-17 08:22:03.358811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.194 ms 00:21:58.371 [2024-11-17 08:22:03.358863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.371 [2024-11-17 08:22:03.364630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.371 [2024-11-17 08:22:03.364814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:58.371 [2024-11-17 08:22:03.364933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.583 ms 00:21:58.371 [2024-11-17 08:22:03.364984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.630 [2024-11-17 08:22:03.393567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.630 [2024-11-17 08:22:03.393757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:58.630 [2024-11-17 08:22:03.393886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.473 ms 00:21:58.630 [2024-11-17 08:22:03.394002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.630 [2024-11-17 08:22:03.412665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.630 [2024-11-17 08:22:03.412826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:58.630 [2024-11-17 08:22:03.412946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.568 ms 00:21:58.630 [2024-11-17 08:22:03.412999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.630 [2024-11-17 08:22:03.523640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.630 [2024-11-17 08:22:03.523834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:58.630 [2024-11-17 08:22:03.523975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.455 ms 00:21:58.630 [2024-11-17 08:22:03.524028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:58.630 [2024-11-17 08:22:03.555704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.630 [2024-11-17 08:22:03.555906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:58.630 [2024-11-17 08:22:03.556048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.454 ms 00:21:58.630 [2024-11-17 08:22:03.556096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.630 [2024-11-17 08:22:03.594774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.630 [2024-11-17 08:22:03.595014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:58.630 [2024-11-17 08:22:03.595220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.458 ms 00:21:58.630 [2024-11-17 08:22:03.595246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.630 [2024-11-17 08:22:03.628820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.630 [2024-11-17 08:22:03.628851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:58.630 [2024-11-17 08:22:03.628863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.526 ms 00:21:58.630 [2024-11-17 08:22:03.628872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.892 [2024-11-17 08:22:03.658611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.892 [2024-11-17 08:22:03.658658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:58.892 [2024-11-17 08:22:03.658689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.659 ms 00:21:58.892 [2024-11-17 08:22:03.658700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.892 [2024-11-17 08:22:03.658746] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:58.892 [2024-11-17 08:22:03.658769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:21:58.892 [2024-11-17 08:22:03.658783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 
08:22:03.658890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.658984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
00:21:58.892 [2024-11-17 08:22:03.659219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:58.892 [2024-11-17 08:22:03.659618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.659994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.660005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:58.893 [2024-11-17 08:22:03.660025] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:58.893 [2024-11-17 08:22:03.660037] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f7d448d0-7c76-4123-9b8f-0733c52f8442 00:21:58.893 [2024-11-17 08:22:03.660048] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:21:58.893 [2024-11-17 08:22:03.660069] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 17856 00:21:58.893 [2024-11-17 08:22:03.660080] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 16896 00:21:58.893 [2024-11-17 08:22:03.660092] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0568 00:21:58.893 [2024-11-17 08:22:03.660104] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:58.893 [2024-11-17 08:22:03.660138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:58.893 [2024-11-17 08:22:03.660149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:58.893 [2024-11-17 08:22:03.660173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:58.893 [2024-11-17 08:22:03.660183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:58.893 [2024-11-17 08:22:03.660194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.893 [2024-11-17 08:22:03.660205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
00:21:58.893 [2024-11-17 08:22:03.660217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.449 ms 00:21:58.893 [2024-11-17 08:22:03.660228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.676407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.893 [2024-11-17 08:22:03.676444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:58.893 [2024-11-17 08:22:03.676458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.106 ms 00:21:58.893 [2024-11-17 08:22:03.676475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.676812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.893 [2024-11-17 08:22:03.676827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:58.893 [2024-11-17 08:22:03.676838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:21:58.893 [2024-11-17 08:22:03.676847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.714207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.893 [2024-11-17 08:22:03.714259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:58.893 [2024-11-17 08:22:03.714280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.893 [2024-11-17 08:22:03.714290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.714346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.893 [2024-11-17 08:22:03.714367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.893 [2024-11-17 08:22:03.714377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.893 [2024-11-17 08:22:03.714386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.714495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.893 [2024-11-17 08:22:03.714528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.893 [2024-11-17 08:22:03.714540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.893 [2024-11-17 08:22:03.714557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.714579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.893 [2024-11-17 08:22:03.714591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.893 [2024-11-17 08:22:03.714602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.893 [2024-11-17 08:22:03.714612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.793986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.893 [2024-11-17 08:22:03.794273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:58.893 [2024-11-17 08:22:03.794310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.893 [2024-11-17 08:22:03.794322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.859247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.893 [2024-11-17 08:22:03.859295] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.893 [2024-11-17 08:22:03.859312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.893 [2024-11-17 08:22:03.859322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.859431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.893 [2024-11-17 08:22:03.859448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:58.893 [2024-11-17 08:22:03.859460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.893 [2024-11-17 08:22:03.859469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.859517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.893 [2024-11-17 08:22:03.859531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:58.893 [2024-11-17 08:22:03.859541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.893 [2024-11-17 08:22:03.859550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.859653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.893 [2024-11-17 08:22:03.859671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:58.893 [2024-11-17 08:22:03.859682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.893 [2024-11-17 08:22:03.859691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.859740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.893 [2024-11-17 08:22:03.859757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:58.893 [2024-11-17 08:22:03.859768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.893 [2024-11-17 08:22:03.859777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.859831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.893 [2024-11-17 08:22:03.859842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:58.893 [2024-11-17 08:22:03.859852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.893 [2024-11-17 08:22:03.859861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.893 [2024-11-17 08:22:03.859907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.893 [2024-11-17 08:22:03.859921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:58.893 [2024-11-17 08:22:03.859931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.893 [2024-11-17 08:22:03.859939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.894 [2024-11-17 08:22:03.860061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 514.445 ms, result 0 00:21:59.832 00:21:59.832 00:21:59.832 08:22:04 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:01.736 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:01.736 08:22:06 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:01.736 08:22:06 ftl.ftl_restore -- 
ftl/restore.sh@85 -- # restore_kill 00:22:01.736 08:22:06 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:01.736 08:22:06 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:01.736 08:22:06 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:01.736 08:22:06 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 75538 00:22:01.736 08:22:06 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 75538 ']' 00:22:01.736 08:22:06 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 75538 00:22:01.736 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75538) - No such process 00:22:01.736 Process with pid 75538 is not found 00:22:01.736 Remove shared memory files 00:22:01.736 08:22:06 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 75538 is not found' 00:22:01.736 08:22:06 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:01.736 08:22:06 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:01.736 08:22:06 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:01.736 08:22:06 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:01.736 08:22:06 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:01.736 08:22:06 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:01.736 08:22:06 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:01.736 ************************************ 00:22:01.736 END TEST ftl_restore 00:22:01.736 ************************************ 00:22:01.736 00:22:01.736 real 3m30.645s 00:22:01.736 user 3m16.620s 00:22:01.736 sys 0m14.864s 00:22:01.736 08:22:06 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.736 08:22:06 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:01.736 08:22:06 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:01.736 08:22:06 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:01.736 08:22:06 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.736 08:22:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:01.736 ************************************ 00:22:01.736 START TEST ftl_dirty_shutdown 00:22:01.736 ************************************ 00:22:01.736 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:01.736 * Looking for test storage... 
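Note on the ftl_restore result above: md5sum -c reports "testfile: OK", so the data written before the clean 'FTL shutdown' (result 0) was read back intact after restore, and the dumped statistics are self-consistent: WAF = total writes / user writes = 17856 / 16896 ≈ 1.0568, matching the logged "WAF: 1.0568". The ftl_dirty_shutdown test starting here constructs the same style of FTL bdev (base device on 0000:00:11.0, NV cache on 0000:00:10.0) and then kills the target without a clean shutdown. As a reference, a minimal sketch of the RPC sequence the script issues below, against an already running spdk_tgt; the UUIDs are the ones from this particular run and change on every run:
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe device
  scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on top of the base bdev
  scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d2209c53-2571-41e8-bf48-80c359063b45   # thin-provisioned 103424 MiB lvol
  scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV-cache NVMe device
  scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1                             # single 5171 MiB cache split
  scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 35b54028-36cd-46e4-a07f-a89e70f402b7 --l2p_dram_limit 10 -c nvc0n1p0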
00:22:01.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:01.736 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:01.736 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:01.736 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:01.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.995 --rc genhtml_branch_coverage=1 00:22:01.995 --rc genhtml_function_coverage=1 00:22:01.995 --rc genhtml_legend=1 00:22:01.995 --rc geninfo_all_blocks=1 00:22:01.995 --rc geninfo_unexecuted_blocks=1 00:22:01.995 00:22:01.995 ' 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:01.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.995 --rc genhtml_branch_coverage=1 00:22:01.995 --rc genhtml_function_coverage=1 00:22:01.995 --rc genhtml_legend=1 00:22:01.995 --rc geninfo_all_blocks=1 00:22:01.995 --rc geninfo_unexecuted_blocks=1 00:22:01.995 00:22:01.995 ' 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:01.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.995 --rc genhtml_branch_coverage=1 00:22:01.995 --rc genhtml_function_coverage=1 00:22:01.995 --rc genhtml_legend=1 00:22:01.995 --rc geninfo_all_blocks=1 00:22:01.995 --rc geninfo_unexecuted_blocks=1 00:22:01.995 00:22:01.995 ' 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:01.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.995 --rc genhtml_branch_coverage=1 00:22:01.995 --rc genhtml_function_coverage=1 00:22:01.995 --rc genhtml_legend=1 00:22:01.995 --rc geninfo_all_blocks=1 00:22:01.995 --rc geninfo_unexecuted_blocks=1 00:22:01.995 00:22:01.995 ' 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:01.995 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:01.996 08:22:06 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77740 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77740 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 77740 ']' 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.996 08:22:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:01.996 [2024-11-17 08:22:06.982945] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
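The waitforlisten call above blocks until the freshly launched target answers on its RPC socket, so none of the bdev RPCs that follow can race the DPDK/EAL bring-up logged next. A reduced sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock; this is not the literal autotest_common.sh implementation, and the retry count simply mirrors max_retries=100 above:
  build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # poll until the target's RPC server responds, or give up after 100 tries
  for ((i = 0; i < 100; i++)); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.5
  done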
00:22:01.996 [2024-11-17 08:22:06.983323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77740 ] 00:22:02.255 [2024-11-17 08:22:07.168831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.515 [2024-11-17 08:22:07.281382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.083 08:22:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.083 08:22:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:22:03.083 08:22:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:03.083 08:22:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:03.083 08:22:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:03.083 08:22:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:03.083 08:22:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:03.083 08:22:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:03.341 08:22:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:03.341 08:22:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:03.341 08:22:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:03.341 08:22:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:03.341 08:22:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:03.341 08:22:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:03.341 08:22:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:03.341 08:22:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:03.600 08:22:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:03.601 { 00:22:03.601 "name": "nvme0n1", 00:22:03.601 "aliases": [ 00:22:03.601 "cd1177e9-d219-40a8-af04-19885e40c5d5" 00:22:03.601 ], 00:22:03.601 "product_name": "NVMe disk", 00:22:03.601 "block_size": 4096, 00:22:03.601 "num_blocks": 1310720, 00:22:03.601 "uuid": "cd1177e9-d219-40a8-af04-19885e40c5d5", 00:22:03.601 "numa_id": -1, 00:22:03.601 "assigned_rate_limits": { 00:22:03.601 "rw_ios_per_sec": 0, 00:22:03.601 "rw_mbytes_per_sec": 0, 00:22:03.601 "r_mbytes_per_sec": 0, 00:22:03.601 "w_mbytes_per_sec": 0 00:22:03.601 }, 00:22:03.601 "claimed": true, 00:22:03.601 "claim_type": "read_many_write_one", 00:22:03.601 "zoned": false, 00:22:03.601 "supported_io_types": { 00:22:03.601 "read": true, 00:22:03.601 "write": true, 00:22:03.601 "unmap": true, 00:22:03.601 "flush": true, 00:22:03.601 "reset": true, 00:22:03.601 "nvme_admin": true, 00:22:03.601 "nvme_io": true, 00:22:03.601 "nvme_io_md": false, 00:22:03.601 "write_zeroes": true, 00:22:03.601 "zcopy": false, 00:22:03.601 "get_zone_info": false, 00:22:03.601 "zone_management": false, 00:22:03.601 "zone_append": false, 00:22:03.601 "compare": true, 00:22:03.601 "compare_and_write": false, 00:22:03.601 "abort": true, 00:22:03.601 "seek_hole": false, 00:22:03.601 "seek_data": false, 00:22:03.601 
"copy": true, 00:22:03.601 "nvme_iov_md": false 00:22:03.601 }, 00:22:03.601 "driver_specific": { 00:22:03.601 "nvme": [ 00:22:03.601 { 00:22:03.601 "pci_address": "0000:00:11.0", 00:22:03.601 "trid": { 00:22:03.601 "trtype": "PCIe", 00:22:03.601 "traddr": "0000:00:11.0" 00:22:03.601 }, 00:22:03.601 "ctrlr_data": { 00:22:03.601 "cntlid": 0, 00:22:03.601 "vendor_id": "0x1b36", 00:22:03.601 "model_number": "QEMU NVMe Ctrl", 00:22:03.601 "serial_number": "12341", 00:22:03.601 "firmware_revision": "8.0.0", 00:22:03.601 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:03.601 "oacs": { 00:22:03.601 "security": 0, 00:22:03.601 "format": 1, 00:22:03.601 "firmware": 0, 00:22:03.601 "ns_manage": 1 00:22:03.601 }, 00:22:03.601 "multi_ctrlr": false, 00:22:03.601 "ana_reporting": false 00:22:03.601 }, 00:22:03.601 "vs": { 00:22:03.601 "nvme_version": "1.4" 00:22:03.601 }, 00:22:03.601 "ns_data": { 00:22:03.601 "id": 1, 00:22:03.601 "can_share": false 00:22:03.601 } 00:22:03.601 } 00:22:03.601 ], 00:22:03.601 "mp_policy": "active_passive" 00:22:03.601 } 00:22:03.601 } 00:22:03.601 ]' 00:22:03.601 08:22:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:03.860 08:22:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:03.860 08:22:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:03.860 08:22:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:03.860 08:22:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:03.860 08:22:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:22:03.860 08:22:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:03.860 08:22:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:03.860 08:22:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:03.860 08:22:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:03.860 08:22:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:04.119 08:22:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=aa8f03c5-c57d-467b-8ae3-c0e460a59ade 00:22:04.119 08:22:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:04.119 08:22:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aa8f03c5-c57d-467b-8ae3-c0e460a59ade 00:22:04.383 08:22:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=d2209c53-2571-41e8-bf48-80c359063b45 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d2209c53-2571-41e8-bf48-80c359063b45 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=35b54028-36cd-46e4-a07f-a89e70f402b7 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 35b54028-36cd-46e4-a07f-a89e70f402b7 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=35b54028-36cd-46e4-a07f-a89e70f402b7 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 35b54028-36cd-46e4-a07f-a89e70f402b7 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=35b54028-36cd-46e4-a07f-a89e70f402b7 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:04.643 08:22:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35b54028-36cd-46e4-a07f-a89e70f402b7 00:22:05.212 08:22:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:05.212 { 00:22:05.212 "name": "35b54028-36cd-46e4-a07f-a89e70f402b7", 00:22:05.212 "aliases": [ 00:22:05.212 "lvs/nvme0n1p0" 00:22:05.212 ], 00:22:05.212 "product_name": "Logical Volume", 00:22:05.212 "block_size": 4096, 00:22:05.212 "num_blocks": 26476544, 00:22:05.212 "uuid": "35b54028-36cd-46e4-a07f-a89e70f402b7", 00:22:05.212 "assigned_rate_limits": { 00:22:05.212 "rw_ios_per_sec": 0, 00:22:05.212 "rw_mbytes_per_sec": 0, 00:22:05.212 "r_mbytes_per_sec": 0, 00:22:05.212 "w_mbytes_per_sec": 0 00:22:05.212 }, 00:22:05.212 "claimed": false, 00:22:05.212 "zoned": false, 00:22:05.212 "supported_io_types": { 00:22:05.212 "read": true, 00:22:05.212 "write": true, 00:22:05.212 "unmap": true, 00:22:05.212 "flush": false, 00:22:05.212 "reset": true, 00:22:05.212 "nvme_admin": false, 00:22:05.212 "nvme_io": false, 00:22:05.212 "nvme_io_md": false, 00:22:05.212 "write_zeroes": true, 00:22:05.212 "zcopy": false, 00:22:05.212 "get_zone_info": false, 00:22:05.212 "zone_management": false, 00:22:05.212 "zone_append": false, 00:22:05.212 "compare": false, 00:22:05.212 "compare_and_write": false, 00:22:05.212 "abort": false, 00:22:05.212 "seek_hole": true, 00:22:05.212 "seek_data": true, 00:22:05.212 "copy": false, 00:22:05.212 "nvme_iov_md": false 00:22:05.212 }, 00:22:05.212 "driver_specific": { 00:22:05.212 "lvol": { 00:22:05.212 "lvol_store_uuid": "d2209c53-2571-41e8-bf48-80c359063b45", 00:22:05.212 "base_bdev": "nvme0n1", 00:22:05.212 "thin_provision": true, 00:22:05.212 "num_allocated_clusters": 0, 00:22:05.212 "snapshot": false, 00:22:05.212 "clone": false, 00:22:05.212 "esnap_clone": false 00:22:05.212 } 00:22:05.212 } 00:22:05.212 } 00:22:05.212 ]' 00:22:05.212 08:22:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:05.212 08:22:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:05.212 08:22:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:05.212 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:05.212 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:05.212 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:05.212 08:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:05.212 08:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:05.212 08:22:10 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:05.471 08:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:05.471 08:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:05.471 08:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 35b54028-36cd-46e4-a07f-a89e70f402b7 00:22:05.471 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=35b54028-36cd-46e4-a07f-a89e70f402b7 00:22:05.471 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:05.471 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:05.471 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:05.471 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35b54028-36cd-46e4-a07f-a89e70f402b7 00:22:05.729 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:05.729 { 00:22:05.729 "name": "35b54028-36cd-46e4-a07f-a89e70f402b7", 00:22:05.729 "aliases": [ 00:22:05.729 "lvs/nvme0n1p0" 00:22:05.729 ], 00:22:05.729 "product_name": "Logical Volume", 00:22:05.729 "block_size": 4096, 00:22:05.729 "num_blocks": 26476544, 00:22:05.729 "uuid": "35b54028-36cd-46e4-a07f-a89e70f402b7", 00:22:05.730 "assigned_rate_limits": { 00:22:05.730 "rw_ios_per_sec": 0, 00:22:05.730 "rw_mbytes_per_sec": 0, 00:22:05.730 "r_mbytes_per_sec": 0, 00:22:05.730 "w_mbytes_per_sec": 0 00:22:05.730 }, 00:22:05.730 "claimed": false, 00:22:05.730 "zoned": false, 00:22:05.730 "supported_io_types": { 00:22:05.730 "read": true, 00:22:05.730 "write": true, 00:22:05.730 "unmap": true, 00:22:05.730 "flush": false, 00:22:05.730 "reset": true, 00:22:05.730 "nvme_admin": false, 00:22:05.730 "nvme_io": false, 00:22:05.730 "nvme_io_md": false, 00:22:05.730 "write_zeroes": true, 00:22:05.730 "zcopy": false, 00:22:05.730 "get_zone_info": false, 00:22:05.730 "zone_management": false, 00:22:05.730 "zone_append": false, 00:22:05.730 "compare": false, 00:22:05.730 "compare_and_write": false, 00:22:05.730 "abort": false, 00:22:05.730 "seek_hole": true, 00:22:05.730 "seek_data": true, 00:22:05.730 "copy": false, 00:22:05.730 "nvme_iov_md": false 00:22:05.730 }, 00:22:05.730 "driver_specific": { 00:22:05.730 "lvol": { 00:22:05.730 "lvol_store_uuid": "d2209c53-2571-41e8-bf48-80c359063b45", 00:22:05.730 "base_bdev": "nvme0n1", 00:22:05.730 "thin_provision": true, 00:22:05.730 "num_allocated_clusters": 0, 00:22:05.730 "snapshot": false, 00:22:05.730 "clone": false, 00:22:05.730 "esnap_clone": false 00:22:05.730 } 00:22:05.730 } 00:22:05.730 } 00:22:05.730 ]' 00:22:05.730 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:05.730 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:05.730 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:05.989 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:05.989 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:05.989 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:05.989 08:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:05.989 08:22:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:05.989 08:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:05.989 08:22:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 35b54028-36cd-46e4-a07f-a89e70f402b7 00:22:05.989 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=35b54028-36cd-46e4-a07f-a89e70f402b7 00:22:05.989 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:05.989 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:05.989 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:05.989 08:22:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 35b54028-36cd-46e4-a07f-a89e70f402b7 00:22:06.248 08:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:06.248 { 00:22:06.248 "name": "35b54028-36cd-46e4-a07f-a89e70f402b7", 00:22:06.248 "aliases": [ 00:22:06.248 "lvs/nvme0n1p0" 00:22:06.248 ], 00:22:06.248 "product_name": "Logical Volume", 00:22:06.248 "block_size": 4096, 00:22:06.248 "num_blocks": 26476544, 00:22:06.249 "uuid": "35b54028-36cd-46e4-a07f-a89e70f402b7", 00:22:06.249 "assigned_rate_limits": { 00:22:06.249 "rw_ios_per_sec": 0, 00:22:06.249 "rw_mbytes_per_sec": 0, 00:22:06.249 "r_mbytes_per_sec": 0, 00:22:06.249 "w_mbytes_per_sec": 0 00:22:06.249 }, 00:22:06.249 "claimed": false, 00:22:06.249 "zoned": false, 00:22:06.249 "supported_io_types": { 00:22:06.249 "read": true, 00:22:06.249 "write": true, 00:22:06.249 "unmap": true, 00:22:06.249 "flush": false, 00:22:06.249 "reset": true, 00:22:06.249 "nvme_admin": false, 00:22:06.249 "nvme_io": false, 00:22:06.249 "nvme_io_md": false, 00:22:06.249 "write_zeroes": true, 00:22:06.249 "zcopy": false, 00:22:06.249 "get_zone_info": false, 00:22:06.249 "zone_management": false, 00:22:06.249 "zone_append": false, 00:22:06.249 "compare": false, 00:22:06.249 "compare_and_write": false, 00:22:06.249 "abort": false, 00:22:06.249 "seek_hole": true, 00:22:06.249 "seek_data": true, 00:22:06.249 "copy": false, 00:22:06.249 "nvme_iov_md": false 00:22:06.249 }, 00:22:06.249 "driver_specific": { 00:22:06.249 "lvol": { 00:22:06.249 "lvol_store_uuid": "d2209c53-2571-41e8-bf48-80c359063b45", 00:22:06.249 "base_bdev": "nvme0n1", 00:22:06.249 "thin_provision": true, 00:22:06.249 "num_allocated_clusters": 0, 00:22:06.249 "snapshot": false, 00:22:06.249 "clone": false, 00:22:06.249 "esnap_clone": false 00:22:06.249 } 00:22:06.249 } 00:22:06.249 } 00:22:06.249 ]' 00:22:06.249 08:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:06.249 08:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:06.249 08:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:06.508 08:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:06.508 08:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:06.508 08:22:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:06.508 08:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:06.508 08:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 35b54028-36cd-46e4-a07f-a89e70f402b7 
--l2p_dram_limit 10' 00:22:06.508 08:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:06.508 08:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:06.508 08:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:06.508 08:22:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 35b54028-36cd-46e4-a07f-a89e70f402b7 --l2p_dram_limit 10 -c nvc0n1p0 00:22:06.768 [2024-11-17 08:22:11.549866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.768 [2024-11-17 08:22:11.549915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:06.768 [2024-11-17 08:22:11.549953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:06.768 [2024-11-17 08:22:11.549964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.768 [2024-11-17 08:22:11.550037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.768 [2024-11-17 08:22:11.550054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.768 [2024-11-17 08:22:11.550068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:06.768 [2024-11-17 08:22:11.550078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.768 [2024-11-17 08:22:11.550160] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:06.768 [2024-11-17 08:22:11.551138] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:06.768 [2024-11-17 08:22:11.551222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.768 [2024-11-17 08:22:11.551237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.768 [2024-11-17 08:22:11.551250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:22:06.768 [2024-11-17 08:22:11.551261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.768 [2024-11-17 08:22:11.551388] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 851e0140-a546-4984-83c2-b80f6477b181 00:22:06.768 [2024-11-17 08:22:11.552522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.768 [2024-11-17 08:22:11.552560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:06.768 [2024-11-17 08:22:11.552590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:06.768 [2024-11-17 08:22:11.552601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.768 [2024-11-17 08:22:11.556757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.768 [2024-11-17 08:22:11.556817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.768 [2024-11-17 08:22:11.556834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.096 ms 00:22:06.768 [2024-11-17 08:22:11.556845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.768 [2024-11-17 08:22:11.556948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.768 [2024-11-17 08:22:11.556970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.768 [2024-11-17 08:22:11.556981] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:06.768 [2024-11-17 08:22:11.556996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.768 [2024-11-17 08:22:11.557107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.768 [2024-11-17 08:22:11.557135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:06.768 [2024-11-17 08:22:11.557165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:06.768 [2024-11-17 08:22:11.557182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.768 [2024-11-17 08:22:11.557212] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:06.768 [2024-11-17 08:22:11.561157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.768 [2024-11-17 08:22:11.561190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.768 [2024-11-17 08:22:11.561224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.950 ms 00:22:06.768 [2024-11-17 08:22:11.561234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.768 [2024-11-17 08:22:11.561275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.768 [2024-11-17 08:22:11.561290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:06.768 [2024-11-17 08:22:11.561302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:06.768 [2024-11-17 08:22:11.561311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.768 [2024-11-17 08:22:11.561406] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:06.768 [2024-11-17 08:22:11.561568] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:06.768 [2024-11-17 08:22:11.561590] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:06.768 [2024-11-17 08:22:11.561606] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:06.768 [2024-11-17 08:22:11.561623] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:06.768 [2024-11-17 08:22:11.561636] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:06.768 [2024-11-17 08:22:11.561650] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:06.769 [2024-11-17 08:22:11.561661] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:06.769 [2024-11-17 08:22:11.561676] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:06.769 [2024-11-17 08:22:11.561686] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:06.769 [2024-11-17 08:22:11.561700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.769 [2024-11-17 08:22:11.561711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:06.769 [2024-11-17 08:22:11.561724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:22:06.769 [2024-11-17 08:22:11.561753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.769 [2024-11-17 08:22:11.561845] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.769 [2024-11-17 08:22:11.561859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:06.769 [2024-11-17 08:22:11.561873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:06.769 [2024-11-17 08:22:11.561884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.769 [2024-11-17 08:22:11.561991] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:06.769 [2024-11-17 08:22:11.562007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:06.769 [2024-11-17 08:22:11.562021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.769 [2024-11-17 08:22:11.562032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:06.769 [2024-11-17 08:22:11.562055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:06.769 [2024-11-17 08:22:11.562077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:06.769 [2024-11-17 08:22:11.562105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.769 [2024-11-17 08:22:11.562128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:06.769 [2024-11-17 08:22:11.562138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:06.769 [2024-11-17 08:22:11.562150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:06.769 [2024-11-17 08:22:11.562177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:06.769 [2024-11-17 08:22:11.562191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:06.769 [2024-11-17 08:22:11.562202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:06.769 [2024-11-17 08:22:11.562229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:06.769 [2024-11-17 08:22:11.562241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:06.769 [2024-11-17 08:22:11.562266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.769 [2024-11-17 08:22:11.562288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:06.769 [2024-11-17 08:22:11.562299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.769 [2024-11-17 08:22:11.562321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:06.769 [2024-11-17 08:22:11.562333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.769 [2024-11-17 08:22:11.562355] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:06.769 [2024-11-17 08:22:11.562365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:06.769 [2024-11-17 08:22:11.562387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:06.769 [2024-11-17 08:22:11.562402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.769 [2024-11-17 08:22:11.562424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:06.769 [2024-11-17 08:22:11.562434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:06.769 [2024-11-17 08:22:11.562446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:06.769 [2024-11-17 08:22:11.562457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:06.769 [2024-11-17 08:22:11.562469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:06.769 [2024-11-17 08:22:11.562479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:06.769 [2024-11-17 08:22:11.562516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:06.769 [2024-11-17 08:22:11.562527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562537] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:06.769 [2024-11-17 08:22:11.562552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:06.769 [2024-11-17 08:22:11.562563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:06.769 [2024-11-17 08:22:11.562575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:06.769 [2024-11-17 08:22:11.562586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:06.769 [2024-11-17 08:22:11.562599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:06.769 [2024-11-17 08:22:11.562609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:06.769 [2024-11-17 08:22:11.562621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:06.769 [2024-11-17 08:22:11.562631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:06.769 [2024-11-17 08:22:11.562643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:06.769 [2024-11-17 08:22:11.562657] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:06.769 [2024-11-17 08:22:11.562673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.769 [2024-11-17 08:22:11.562688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:06.769 [2024-11-17 08:22:11.562701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:06.769 [2024-11-17 08:22:11.562712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:06.769 [2024-11-17 08:22:11.562724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:06.769 [2024-11-17 08:22:11.562735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:06.769 [2024-11-17 08:22:11.562747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:06.769 [2024-11-17 08:22:11.562758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:06.769 [2024-11-17 08:22:11.562770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:06.769 [2024-11-17 08:22:11.562781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:06.769 [2024-11-17 08:22:11.562795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:06.769 [2024-11-17 08:22:11.562806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:06.769 [2024-11-17 08:22:11.562820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:06.769 [2024-11-17 08:22:11.562831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:06.769 [2024-11-17 08:22:11.562843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:06.769 [2024-11-17 08:22:11.562854] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:06.769 [2024-11-17 08:22:11.562868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:06.769 [2024-11-17 08:22:11.562880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:06.769 [2024-11-17 08:22:11.562893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:06.769 [2024-11-17 08:22:11.562904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:06.769 [2024-11-17 08:22:11.562916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:06.769 [2024-11-17 08:22:11.562928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.769 [2024-11-17 08:22:11.562942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:06.769 [2024-11-17 08:22:11.562953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:22:06.769 [2024-11-17 08:22:11.562965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.769 [2024-11-17 08:22:11.563028] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:06.769 [2024-11-17 08:22:11.563051] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:09.305 [2024-11-17 08:22:13.804570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:13.804655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:09.305 [2024-11-17 08:22:13.804675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2241.556 ms 00:22:09.305 [2024-11-17 08:22:13.804688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:13.830286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:13.830374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:09.305 [2024-11-17 08:22:13.830393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.290 ms 00:22:09.305 [2024-11-17 08:22:13.830405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:13.830579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:13.830601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:09.305 [2024-11-17 08:22:13.830613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:09.305 [2024-11-17 08:22:13.830641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:13.862352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:13.862430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:09.305 [2024-11-17 08:22:13.862446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.625 ms 00:22:09.305 [2024-11-17 08:22:13.862476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:13.862518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:13.862538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:09.305 [2024-11-17 08:22:13.862550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:09.305 [2024-11-17 08:22:13.862561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:13.862977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:13.863013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:09.305 [2024-11-17 08:22:13.863027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:22:09.305 [2024-11-17 08:22:13.863039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:13.863182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:13.863202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:09.305 [2024-11-17 08:22:13.863233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:22:09.305 [2024-11-17 08:22:13.863247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:13.878293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:13.878369] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:09.305 [2024-11-17 08:22:13.878385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.021 ms 00:22:09.305 [2024-11-17 08:22:13.878397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:13.889147] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:09.305 [2024-11-17 08:22:13.891651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:13.891697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:09.305 [2024-11-17 08:22:13.891746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.117 ms 00:22:09.305 [2024-11-17 08:22:13.891765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:13.957238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:13.957304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:09.305 [2024-11-17 08:22:13.957341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.433 ms 00:22:09.305 [2024-11-17 08:22:13.957352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:13.957590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:13.957628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:09.305 [2024-11-17 08:22:13.957646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:22:09.305 [2024-11-17 08:22:13.957657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:13.983193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:13.983232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:09.305 [2024-11-17 08:22:13.983266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.469 ms 00:22:09.305 [2024-11-17 08:22:13.983276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:14.008135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:14.008194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:09.305 [2024-11-17 08:22:14.008229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.802 ms 00:22:09.305 [2024-11-17 08:22:14.008240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.305 [2024-11-17 08:22:14.009005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.305 [2024-11-17 08:22:14.009053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:09.305 [2024-11-17 08:22:14.009070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:22:09.305 [2024-11-17 08:22:14.009094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.306 [2024-11-17 08:22:14.079305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.306 [2024-11-17 08:22:14.079354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:09.306 [2024-11-17 08:22:14.079392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.138 ms 00:22:09.306 [2024-11-17 08:22:14.079428] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.306 [2024-11-17 08:22:14.105136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.306 [2024-11-17 08:22:14.105175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:09.306 [2024-11-17 08:22:14.105208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.607 ms 00:22:09.306 [2024-11-17 08:22:14.105219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.306 [2024-11-17 08:22:14.129747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.306 [2024-11-17 08:22:14.129784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:09.306 [2024-11-17 08:22:14.129816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.482 ms 00:22:09.306 [2024-11-17 08:22:14.129826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.306 [2024-11-17 08:22:14.154522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.306 [2024-11-17 08:22:14.154559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:09.306 [2024-11-17 08:22:14.154593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.650 ms 00:22:09.306 [2024-11-17 08:22:14.154603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.306 [2024-11-17 08:22:14.154655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.306 [2024-11-17 08:22:14.154671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:09.306 [2024-11-17 08:22:14.154687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:09.306 [2024-11-17 08:22:14.154697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.306 [2024-11-17 08:22:14.154804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.306 [2024-11-17 08:22:14.154852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:09.306 [2024-11-17 08:22:14.154869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:09.306 [2024-11-17 08:22:14.154879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.306 [2024-11-17 08:22:14.156085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2605.667 ms, result 0 00:22:09.306 { 00:22:09.306 "name": "ftl0", 00:22:09.306 "uuid": "851e0140-a546-4984-83c2-b80f6477b181" 00:22:09.306 } 00:22:09.306 08:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:09.306 08:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:09.565 08:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:09.565 08:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:09.565 08:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:09.824 /dev/nbd0 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:09.824 1+0 records in 00:22:09.824 1+0 records out 00:22:09.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383777 s, 10.7 MB/s 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:22:09.824 08:22:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:10.083 [2024-11-17 08:22:14.886060] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:10.083 [2024-11-17 08:22:14.886254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77882 ] 00:22:10.083 [2024-11-17 08:22:15.070482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.342 [2024-11-17 08:22:15.193011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.719  [2024-11-17T08:22:17.667Z] Copying: 208/1024 [MB] (208 MBps) [2024-11-17T08:22:18.604Z] Copying: 416/1024 [MB] (208 MBps) [2024-11-17T08:22:19.538Z] Copying: 625/1024 [MB] (209 MBps) [2024-11-17T08:22:20.490Z] Copying: 825/1024 [MB] (199 MBps) [2024-11-17T08:22:20.775Z] Copying: 1013/1024 [MB] (187 MBps) [2024-11-17T08:22:21.354Z] Copying: 1024/1024 [MB] (average 202 MBps) 00:22:16.342 00:22:16.342 08:22:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:18.248 08:22:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:18.509 [2024-11-17 08:22:23.280661] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
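The RPC sequence this dirty_shutdown test has driven so far can be replayed by hand. A minimal sketch, assuming a running SPDK target and the paths shown in this trace (the bdev names, lvol UUID, split size, and dd parameters are all copied from the commands above, not invented):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the PCIe controller that backs the NV-cache write buffer
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    # Carve one 5171 MiB split (nvc0n1p0) out of nvc0n1 for the cache
    $RPC bdev_split_create nvc0n1 -s 5171 1
    # Create the FTL bdev on the thin-provisioned lvol, capping L2P DRAM at 10 MiB
    $RPC -t 240 bdev_ftl_create -b ftl0 -d 35b54028-36cd-46e4-a07f-a89e70f402b7 --l2p_dram_limit 10 -c nvc0n1p0
    # Expose ftl0 over NBD and write the pre-generated 1 GiB test file through it
    modprobe nbd
    $RPC nbd_start_disk ftl0 /dev/nbd0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct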
00:22:18.509 [2024-11-17 08:22:23.280833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77969 ] 00:22:18.509 [2024-11-17 08:22:23.466914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.768 [2024-11-17 08:22:23.589369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.147  [2024-11-17T08:22:26.097Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-17T08:22:27.034Z] Copying: 25/1024 [MB] (12 MBps) [2024-11-17T08:22:27.972Z] Copying: 37/1024 [MB] (12 MBps) [2024-11-17T08:22:28.908Z] Copying: 50/1024 [MB] (12 MBps) [2024-11-17T08:22:29.846Z] Copying: 62/1024 [MB] (12 MBps) [2024-11-17T08:22:31.225Z] Copying: 74/1024 [MB] (12 MBps) [2024-11-17T08:22:32.163Z] Copying: 87/1024 [MB] (12 MBps) [2024-11-17T08:22:33.101Z] Copying: 100/1024 [MB] (12 MBps) [2024-11-17T08:22:34.039Z] Copying: 112/1024 [MB] (12 MBps) [2024-11-17T08:22:34.977Z] Copying: 125/1024 [MB] (12 MBps) [2024-11-17T08:22:35.915Z] Copying: 137/1024 [MB] (12 MBps) [2024-11-17T08:22:36.853Z] Copying: 152/1024 [MB] (14 MBps) [2024-11-17T08:22:38.231Z] Copying: 166/1024 [MB] (14 MBps) [2024-11-17T08:22:39.168Z] Copying: 181/1024 [MB] (14 MBps) [2024-11-17T08:22:40.104Z] Copying: 196/1024 [MB] (14 MBps) [2024-11-17T08:22:41.042Z] Copying: 211/1024 [MB] (15 MBps) [2024-11-17T08:22:41.980Z] Copying: 226/1024 [MB] (15 MBps) [2024-11-17T08:22:42.918Z] Copying: 241/1024 [MB] (14 MBps) [2024-11-17T08:22:43.856Z] Copying: 255/1024 [MB] (14 MBps) [2024-11-17T08:22:45.232Z] Copying: 270/1024 [MB] (14 MBps) [2024-11-17T08:22:46.170Z] Copying: 285/1024 [MB] (14 MBps) [2024-11-17T08:22:47.106Z] Copying: 300/1024 [MB] (15 MBps) [2024-11-17T08:22:48.044Z] Copying: 314/1024 [MB] (14 MBps) [2024-11-17T08:22:48.980Z] Copying: 329/1024 [MB] (14 MBps) [2024-11-17T08:22:49.916Z] Copying: 344/1024 [MB] (14 MBps) [2024-11-17T08:22:50.854Z] Copying: 358/1024 [MB] (14 MBps) [2024-11-17T08:22:52.240Z] Copying: 373/1024 [MB] (14 MBps) [2024-11-17T08:22:52.861Z] Copying: 388/1024 [MB] (14 MBps) [2024-11-17T08:22:54.248Z] Copying: 403/1024 [MB] (14 MBps) [2024-11-17T08:22:54.817Z] Copying: 417/1024 [MB] (14 MBps) [2024-11-17T08:22:56.197Z] Copying: 432/1024 [MB] (14 MBps) [2024-11-17T08:22:57.135Z] Copying: 447/1024 [MB] (14 MBps) [2024-11-17T08:22:58.072Z] Copying: 461/1024 [MB] (14 MBps) [2024-11-17T08:22:59.010Z] Copying: 476/1024 [MB] (14 MBps) [2024-11-17T08:22:59.946Z] Copying: 491/1024 [MB] (14 MBps) [2024-11-17T08:23:00.884Z] Copying: 505/1024 [MB] (14 MBps) [2024-11-17T08:23:01.822Z] Copying: 520/1024 [MB] (14 MBps) [2024-11-17T08:23:03.200Z] Copying: 535/1024 [MB] (14 MBps) [2024-11-17T08:23:04.139Z] Copying: 549/1024 [MB] (14 MBps) [2024-11-17T08:23:05.077Z] Copying: 564/1024 [MB] (14 MBps) [2024-11-17T08:23:06.016Z] Copying: 579/1024 [MB] (14 MBps) [2024-11-17T08:23:06.954Z] Copying: 594/1024 [MB] (14 MBps) [2024-11-17T08:23:07.892Z] Copying: 608/1024 [MB] (14 MBps) [2024-11-17T08:23:08.829Z] Copying: 623/1024 [MB] (14 MBps) [2024-11-17T08:23:10.206Z] Copying: 637/1024 [MB] (14 MBps) [2024-11-17T08:23:11.143Z] Copying: 652/1024 [MB] (14 MBps) [2024-11-17T08:23:12.080Z] Copying: 666/1024 [MB] (14 MBps) [2024-11-17T08:23:13.017Z] Copying: 681/1024 [MB] (14 MBps) [2024-11-17T08:23:13.955Z] Copying: 696/1024 [MB] (14 MBps) [2024-11-17T08:23:14.892Z] Copying: 711/1024 [MB] (14 MBps) [2024-11-17T08:23:15.830Z] 
Copying: 726/1024 [MB] (15 MBps) [2024-11-17T08:23:17.210Z] Copying: 741/1024 [MB] (14 MBps) [2024-11-17T08:23:18.146Z] Copying: 755/1024 [MB] (14 MBps) [2024-11-17T08:23:19.084Z] Copying: 770/1024 [MB] (14 MBps) [2024-11-17T08:23:20.022Z] Copying: 785/1024 [MB] (14 MBps) [2024-11-17T08:23:20.959Z] Copying: 799/1024 [MB] (14 MBps) [2024-11-17T08:23:21.898Z] Copying: 814/1024 [MB] (14 MBps) [2024-11-17T08:23:22.835Z] Copying: 829/1024 [MB] (15 MBps) [2024-11-17T08:23:24.229Z] Copying: 844/1024 [MB] (15 MBps) [2024-11-17T08:23:24.834Z] Copying: 859/1024 [MB] (14 MBps) [2024-11-17T08:23:26.214Z] Copying: 874/1024 [MB] (14 MBps) [2024-11-17T08:23:27.154Z] Copying: 889/1024 [MB] (14 MBps) [2024-11-17T08:23:28.104Z] Copying: 903/1024 [MB] (14 MBps) [2024-11-17T08:23:29.040Z] Copying: 918/1024 [MB] (14 MBps) [2024-11-17T08:23:29.975Z] Copying: 933/1024 [MB] (14 MBps) [2024-11-17T08:23:30.913Z] Copying: 948/1024 [MB] (14 MBps) [2024-11-17T08:23:31.850Z] Copying: 962/1024 [MB] (14 MBps) [2024-11-17T08:23:33.229Z] Copying: 978/1024 [MB] (15 MBps) [2024-11-17T08:23:34.166Z] Copying: 992/1024 [MB] (14 MBps) [2024-11-17T08:23:35.104Z] Copying: 1008/1024 [MB] (15 MBps) [2024-11-17T08:23:35.104Z] Copying: 1022/1024 [MB] (14 MBps) [2024-11-17T08:23:36.042Z] Copying: 1024/1024 [MB] (average 14 MBps) 00:23:31.030 00:23:31.030 08:23:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:31.030 08:23:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:31.030 08:23:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:31.289 [2024-11-17 08:23:36.263566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.289 [2024-11-17 08:23:36.263831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:31.289 [2024-11-17 08:23:36.263862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:31.289 [2024-11-17 08:23:36.263901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.289 [2024-11-17 08:23:36.263942] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:31.289 [2024-11-17 08:23:36.266851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.289 [2024-11-17 08:23:36.267016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:31.289 [2024-11-17 08:23:36.267046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.876 ms 00:23:31.289 [2024-11-17 08:23:36.267058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.289 [2024-11-17 08:23:36.269036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.289 [2024-11-17 08:23:36.269074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:31.289 [2024-11-17 08:23:36.269119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:23:31.289 [2024-11-17 08:23:36.269130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.289 [2024-11-17 08:23:36.284843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.289 [2024-11-17 08:23:36.285026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:31.289 [2024-11-17 08:23:36.285059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.684 ms 00:23:31.289 
[2024-11-17 08:23:36.285072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.289 [2024-11-17 08:23:36.290496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.289 [2024-11-17 08:23:36.290528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:31.289 [2024-11-17 08:23:36.290544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.343 ms 00:23:31.289 [2024-11-17 08:23:36.290554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.549 [2024-11-17 08:23:36.316199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.549 [2024-11-17 08:23:36.316238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:31.549 [2024-11-17 08:23:36.316257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.553 ms 00:23:31.549 [2024-11-17 08:23:36.316267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.549 [2024-11-17 08:23:36.331926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.549 [2024-11-17 08:23:36.332130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:31.549 [2024-11-17 08:23:36.332163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.598 ms 00:23:31.549 [2024-11-17 08:23:36.332178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.549 [2024-11-17 08:23:36.332346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.549 [2024-11-17 08:23:36.332366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:31.550 [2024-11-17 08:23:36.332380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:23:31.550 [2024-11-17 08:23:36.332390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.550 [2024-11-17 08:23:36.356974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.550 [2024-11-17 08:23:36.357011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:31.550 [2024-11-17 08:23:36.357028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.557 ms 00:23:31.550 [2024-11-17 08:23:36.357037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.550 [2024-11-17 08:23:36.381957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.550 [2024-11-17 08:23:36.381996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:31.550 [2024-11-17 08:23:36.382013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.872 ms 00:23:31.550 [2024-11-17 08:23:36.382023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.550 [2024-11-17 08:23:36.406247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.550 [2024-11-17 08:23:36.406285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:31.550 [2024-11-17 08:23:36.406302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.176 ms 00:23:31.550 [2024-11-17 08:23:36.406311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.550 [2024-11-17 08:23:36.430248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.550 [2024-11-17 08:23:36.430285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:31.550 [2024-11-17 08:23:36.430302] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.848 ms 00:23:31.550 [2024-11-17 08:23:36.430312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.550 [2024-11-17 08:23:36.430357] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:31.550 [2024-11-17 08:23:36.430378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:23:31.550 [2024-11-17 08:23:36.430641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.430997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:31.550 [2024-11-17 08:23:36.431285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431686] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:31.551 [2024-11-17 08:23:36.431755] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:31.551 [2024-11-17 08:23:36.431768] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 851e0140-a546-4984-83c2-b80f6477b181 00:23:31.551 [2024-11-17 08:23:36.431780] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:31.551 [2024-11-17 08:23:36.431793] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:31.551 [2024-11-17 08:23:36.431803] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:31.551 [2024-11-17 08:23:36.431818] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:31.551 [2024-11-17 08:23:36.431827] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:31.551 [2024-11-17 08:23:36.431840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:31.551 [2024-11-17 08:23:36.431850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:31.551 [2024-11-17 08:23:36.431861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:31.551 [2024-11-17 08:23:36.431870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:31.551 [2024-11-17 08:23:36.431891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.551 [2024-11-17 08:23:36.431916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:31.551 [2024-11-17 08:23:36.431929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:23:31.551 [2024-11-17 08:23:36.431939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.551 [2024-11-17 08:23:36.445609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.551 [2024-11-17 08:23:36.445785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:31.551 [2024-11-17 08:23:36.445938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.622 ms 00:23:31.551 [2024-11-17 08:23:36.445986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.551 [2024-11-17 08:23:36.446513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.551 [2024-11-17 08:23:36.446660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:31.551 [2024-11-17 08:23:36.446792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:23:31.551 [2024-11-17 08:23:36.446920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.551 [2024-11-17 08:23:36.489799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.551 [2024-11-17 08:23:36.489982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:31.551 [2024-11-17 08:23:36.490118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.551 [2024-11-17 08:23:36.490170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.551 [2024-11-17 08:23:36.490260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
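(Editor's note on the statistics block above: judging by the two dumps in this log, the WAF line reads as total writes divided by user writes. At this point the device has logged 960 writes and zero user writes, so the ratio is undefined and FTL prints "WAF: inf"; the closing dump at the end of this test reports total writes: 130496 against user writes: 129536 and prints "WAF: 1.0074", which checks out. A minimal arithmetic sketch, assuming only that bc is installed:

  # WAF as shown by ftl_dev_dump_stats = total writes / user writes
  # (all figures copied from the two dumps in this log)
  echo "scale=4; 130496 / 129536" | bc   # 1.0074 -> matches the final dump's "WAF: 1.0074"
  echo $(( 130496 - 129536 ))            # 960    -> the non-user write overhead; the first
                                         #           dump shows the same 960 total writes
                                         #           before any user I/O

End of note.)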
00:23:31.551 [2024-11-17 08:23:36.490393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:31.551 [2024-11-17 08:23:36.490477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.551 [2024-11-17 08:23:36.490534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.551 [2024-11-17 08:23:36.490666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.551 [2024-11-17 08:23:36.490861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:31.551 [2024-11-17 08:23:36.490931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.551 [2024-11-17 08:23:36.490974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.551 [2024-11-17 08:23:36.491142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.551 [2024-11-17 08:23:36.491169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:31.551 [2024-11-17 08:23:36.491184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.551 [2024-11-17 08:23:36.491195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.811 [2024-11-17 08:23:36.571315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.811 [2024-11-17 08:23:36.571623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:31.811 [2024-11-17 08:23:36.571827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.811 [2024-11-17 08:23:36.571890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.811 [2024-11-17 08:23:36.642024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.811 [2024-11-17 08:23:36.642292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:31.811 [2024-11-17 08:23:36.642426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.811 [2024-11-17 08:23:36.642508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.811 [2024-11-17 08:23:36.642800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.811 [2024-11-17 08:23:36.642833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:31.811 [2024-11-17 08:23:36.642850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.811 [2024-11-17 08:23:36.642864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.811 [2024-11-17 08:23:36.642933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.811 [2024-11-17 08:23:36.642950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:31.811 [2024-11-17 08:23:36.642962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.811 [2024-11-17 08:23:36.642972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.811 [2024-11-17 08:23:36.643137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.811 [2024-11-17 08:23:36.643157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:31.811 [2024-11-17 08:23:36.643171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.811 [2024-11-17 08:23:36.643181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.811 [2024-11-17 
08:23:36.643241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.811 [2024-11-17 08:23:36.643259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:31.811 [2024-11-17 08:23:36.643297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.811 [2024-11-17 08:23:36.643308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.811 [2024-11-17 08:23:36.643356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.811 [2024-11-17 08:23:36.643371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:31.811 [2024-11-17 08:23:36.643384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.811 [2024-11-17 08:23:36.643394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.811 [2024-11-17 08:23:36.643499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:31.811 [2024-11-17 08:23:36.643518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:31.812 [2024-11-17 08:23:36.643533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:31.812 [2024-11-17 08:23:36.643545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.812 [2024-11-17 08:23:36.643740] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.100 ms, result 0 00:23:31.812 true 00:23:31.812 08:23:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77740 00:23:31.812 08:23:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77740 00:23:31.812 08:23:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:31.812 [2024-11-17 08:23:36.744030] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:31.812 [2024-11-17 08:23:36.744421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78695 ] 00:23:32.071 [2024-11-17 08:23:36.905978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.071 [2024-11-17 08:23:36.990305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.451  [2024-11-17T08:23:39.400Z] Copying: 217/1024 [MB] (217 MBps) [2024-11-17T08:23:40.338Z] Copying: 429/1024 [MB] (211 MBps) [2024-11-17T08:23:41.276Z] Copying: 642/1024 [MB] (213 MBps) [2024-11-17T08:23:42.212Z] Copying: 851/1024 [MB] (209 MBps) [2024-11-17T08:23:43.151Z] Copying: 1024/1024 [MB] (average 212 MBps) 00:23:38.139 00:23:38.139 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77740 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:38.139 08:23:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:38.139 [2024-11-17 08:23:42.931858] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
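(Editor's note: the command echoes above are the dirty-shutdown scenario itself. The bash driver SIGKILLs the spdk_tgt that owns ftl0, so the FTL device never gets a clean shutdown, clears the dead target's trace file, stages 1024 MiB of random data, and then replays that data into ftl0 from a standalone spdk_dd. A condensed sketch of dirty_shutdown.sh@83-@88 as they ran here; the pid and paths are the ones from this run, and the authoritative logic lives in test/ftl/dirty_shutdown.sh:

  kill -9 77740                              # @83: hard-kill spdk_tgt ("$SPDK_BIN_DIR/spdk_tgt" -m 0x1)
  rm -f /dev/shm/spdk_tgt_trace.pid77740     # @84: drop the dead target's shared-memory trace file

  # @87: stage 1024 MiB of random data (262144 blocks x 4096 B)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144

  # @88: write it into ftl0, seeking past the first 262144 blocks; this standalone
  # spdk_dd brings the device up from the saved config passed via --json
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 \
      --count=262144 --seek=262144 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Note the throughput contrast between the two copies: staging from /dev/urandom runs at ~212 MBps above, while the same 1024 MiB written through ftl0 below sustains ~22 MBps. End of note.)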
00:23:38.139 [2024-11-17 08:23:42.932027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78760 ] 00:23:38.139 [2024-11-17 08:23:43.111375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.398 [2024-11-17 08:23:43.200603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.656 [2024-11-17 08:23:43.465203] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:38.656 [2024-11-17 08:23:43.465286] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:38.656 [2024-11-17 08:23:43.530110] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:38.656 [2024-11-17 08:23:43.530536] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:38.656 [2024-11-17 08:23:43.530908] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:38.917 [2024-11-17 08:23:43.802862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.917 [2024-11-17 08:23:43.803059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:38.917 [2024-11-17 08:23:43.803103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:38.917 [2024-11-17 08:23:43.803151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.917 [2024-11-17 08:23:43.803240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.917 [2024-11-17 08:23:43.803260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.917 [2024-11-17 08:23:43.803274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:38.917 [2024-11-17 08:23:43.803285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.917 [2024-11-17 08:23:43.803319] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:38.917 [2024-11-17 08:23:43.804263] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:38.917 [2024-11-17 08:23:43.804295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.917 [2024-11-17 08:23:43.804308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.917 [2024-11-17 08:23:43.804321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:23:38.917 [2024-11-17 08:23:43.804332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.917 [2024-11-17 08:23:43.805635] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:38.917 [2024-11-17 08:23:43.820053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.917 [2024-11-17 08:23:43.820174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:38.917 [2024-11-17 08:23:43.820194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.420 ms 00:23:38.917 [2024-11-17 08:23:43.820205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.917 [2024-11-17 08:23:43.820287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.917 [2024-11-17 08:23:43.820306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:23:38.917 [2024-11-17 08:23:43.820317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:38.917 [2024-11-17 08:23:43.820328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.917 [2024-11-17 08:23:43.824789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.917 [2024-11-17 08:23:43.824825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.917 [2024-11-17 08:23:43.824838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.362 ms 00:23:38.917 [2024-11-17 08:23:43.824847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.917 [2024-11-17 08:23:43.824921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.917 [2024-11-17 08:23:43.824937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.917 [2024-11-17 08:23:43.824948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:38.917 [2024-11-17 08:23:43.824956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.917 [2024-11-17 08:23:43.825004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.917 [2024-11-17 08:23:43.825023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:38.917 [2024-11-17 08:23:43.825033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:38.917 [2024-11-17 08:23:43.825042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.917 [2024-11-17 08:23:43.825070] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:38.917 [2024-11-17 08:23:43.828704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.917 [2024-11-17 08:23:43.828737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.917 [2024-11-17 08:23:43.828751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.641 ms 00:23:38.917 [2024-11-17 08:23:43.828772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.917 [2024-11-17 08:23:43.828805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.917 [2024-11-17 08:23:43.828818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:38.917 [2024-11-17 08:23:43.828828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:38.917 [2024-11-17 08:23:43.828837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.917 [2024-11-17 08:23:43.828862] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:38.917 [2024-11-17 08:23:43.828890] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:38.917 [2024-11-17 08:23:43.828926] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:38.917 [2024-11-17 08:23:43.828944] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:38.917 [2024-11-17 08:23:43.829037] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:38.917 [2024-11-17 08:23:43.829051] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:38.917 
[2024-11-17 08:23:43.829063] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:38.917 [2024-11-17 08:23:43.829087] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:38.917 [2024-11-17 08:23:43.829136] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:38.917 [2024-11-17 08:23:43.829149] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:38.917 [2024-11-17 08:23:43.829159] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:38.917 [2024-11-17 08:23:43.829168] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:38.917 [2024-11-17 08:23:43.829177] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:38.917 [2024-11-17 08:23:43.829187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.917 [2024-11-17 08:23:43.829197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:38.918 [2024-11-17 08:23:43.829207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:23:38.918 [2024-11-17 08:23:43.829232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.918 [2024-11-17 08:23:43.829311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.918 [2024-11-17 08:23:43.829329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:38.918 [2024-11-17 08:23:43.829339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:38.918 [2024-11-17 08:23:43.829349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.918 [2024-11-17 08:23:43.829514] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:38.918 [2024-11-17 08:23:43.829549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:38.918 [2024-11-17 08:23:43.829561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.918 [2024-11-17 08:23:43.829572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.918 [2024-11-17 08:23:43.829583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:38.918 [2024-11-17 08:23:43.829592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:38.918 [2024-11-17 08:23:43.829602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:38.918 [2024-11-17 08:23:43.829612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:38.918 [2024-11-17 08:23:43.829622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:38.918 [2024-11-17 08:23:43.829631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.918 [2024-11-17 08:23:43.829641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:38.918 [2024-11-17 08:23:43.829662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:38.918 [2024-11-17 08:23:43.829672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.918 [2024-11-17 08:23:43.829682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:38.918 [2024-11-17 08:23:43.829692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:38.918 [2024-11-17 08:23:43.829702] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.918 [2024-11-17 08:23:43.829711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:38.918 [2024-11-17 08:23:43.829720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:38.918 [2024-11-17 08:23:43.829729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.918 [2024-11-17 08:23:43.829739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:38.918 [2024-11-17 08:23:43.829748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:38.918 [2024-11-17 08:23:43.829757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.918 [2024-11-17 08:23:43.829766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:38.918 [2024-11-17 08:23:43.829775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:38.918 [2024-11-17 08:23:43.829784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.918 [2024-11-17 08:23:43.829794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:38.918 [2024-11-17 08:23:43.829803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:38.918 [2024-11-17 08:23:43.829812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.918 [2024-11-17 08:23:43.829821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:38.918 [2024-11-17 08:23:43.829830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:38.918 [2024-11-17 08:23:43.829840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.918 [2024-11-17 08:23:43.829849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:38.918 [2024-11-17 08:23:43.829858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:38.918 [2024-11-17 08:23:43.829867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.918 [2024-11-17 08:23:43.829876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:38.918 [2024-11-17 08:23:43.829886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:38.918 [2024-11-17 08:23:43.829895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.918 [2024-11-17 08:23:43.829905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:38.918 [2024-11-17 08:23:43.829914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:38.918 [2024-11-17 08:23:43.829923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.918 [2024-11-17 08:23:43.829933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:38.918 [2024-11-17 08:23:43.829942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:38.918 [2024-11-17 08:23:43.829951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.918 [2024-11-17 08:23:43.829960] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:38.918 [2024-11-17 08:23:43.829971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:38.918 [2024-11-17 08:23:43.829980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.918 [2024-11-17 08:23:43.829996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.918 [2024-11-17 
08:23:43.830007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:38.918 [2024-11-17 08:23:43.830016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:38.918 [2024-11-17 08:23:43.830026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:38.918 [2024-11-17 08:23:43.830035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:38.918 [2024-11-17 08:23:43.830045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:38.918 [2024-11-17 08:23:43.830054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:38.918 [2024-11-17 08:23:43.830066] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:38.918 [2024-11-17 08:23:43.830077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.918 [2024-11-17 08:23:43.830089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:38.918 [2024-11-17 08:23:43.830099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:38.918 [2024-11-17 08:23:43.830109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:38.918 [2024-11-17 08:23:43.830119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:38.918 [2024-11-17 08:23:43.830129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:38.918 [2024-11-17 08:23:43.830139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:38.918 [2024-11-17 08:23:43.830149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:38.918 [2024-11-17 08:23:43.830174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:38.918 [2024-11-17 08:23:43.830189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:38.918 [2024-11-17 08:23:43.830199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:38.918 [2024-11-17 08:23:43.830209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:38.918 [2024-11-17 08:23:43.830220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:38.918 [2024-11-17 08:23:43.830230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:38.918 [2024-11-17 08:23:43.830240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:38.918 [2024-11-17 08:23:43.830250] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:23:38.918 [2024-11-17 08:23:43.830261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.918 [2024-11-17 08:23:43.830273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:38.918 [2024-11-17 08:23:43.830284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:38.918 [2024-11-17 08:23:43.830295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:38.918 [2024-11-17 08:23:43.830305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:38.918 [2024-11-17 08:23:43.830316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.918 [2024-11-17 08:23:43.830326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:38.918 [2024-11-17 08:23:43.830337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:23:38.918 [2024-11-17 08:23:43.830347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.918 [2024-11-17 08:23:43.859064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.918 [2024-11-17 08:23:43.859152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.919 [2024-11-17 08:23:43.859172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.636 ms 00:23:38.919 [2024-11-17 08:23:43.859183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.919 [2024-11-17 08:23:43.859320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.919 [2024-11-17 08:23:43.859356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:38.919 [2024-11-17 08:23:43.859368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:38.919 [2024-11-17 08:23:43.859378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.919 [2024-11-17 08:23:43.898984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.919 [2024-11-17 08:23:43.899041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.919 [2024-11-17 08:23:43.899059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.491 ms 00:23:38.919 [2024-11-17 08:23:43.899072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.919 [2024-11-17 08:23:43.899178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.919 [2024-11-17 08:23:43.899210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.919 [2024-11-17 08:23:43.899222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:38.919 [2024-11-17 08:23:43.899232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.919 [2024-11-17 08:23:43.899656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.919 [2024-11-17 08:23:43.899746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.919 [2024-11-17 08:23:43.899765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:23:38.919 [2024-11-17 08:23:43.899777] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.919 [2024-11-17 08:23:43.899945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.919 [2024-11-17 08:23:43.899977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.919 [2024-11-17 08:23:43.899988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:23:38.919 [2024-11-17 08:23:43.899997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.919 [2024-11-17 08:23:43.914184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.919 [2024-11-17 08:23:43.914418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.919 [2024-11-17 08:23:43.914446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.156 ms 00:23:38.919 [2024-11-17 08:23:43.914458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.178 [2024-11-17 08:23:43.929105] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:39.178 [2024-11-17 08:23:43.929178] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:39.178 [2024-11-17 08:23:43.929198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.178 [2024-11-17 08:23:43.929211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:39.178 [2024-11-17 08:23:43.929223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.582 ms 00:23:39.178 [2024-11-17 08:23:43.929244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.178 [2024-11-17 08:23:43.954009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.178 [2024-11-17 08:23:43.954108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:39.178 [2024-11-17 08:23:43.954157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.694 ms 00:23:39.178 [2024-11-17 08:23:43.954167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.179 [2024-11-17 08:23:43.968158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.179 [2024-11-17 08:23:43.968194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:39.179 [2024-11-17 08:23:43.968209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.904 ms 00:23:39.179 [2024-11-17 08:23:43.968219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.179 [2024-11-17 08:23:43.980913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.179 [2024-11-17 08:23:43.980952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:39.179 [2024-11-17 08:23:43.980966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.650 ms 00:23:39.179 [2024-11-17 08:23:43.980975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.179 [2024-11-17 08:23:43.981770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.179 [2024-11-17 08:23:43.981797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:39.179 [2024-11-17 08:23:43.981810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:23:39.179 [2024-11-17 08:23:43.981821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
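(Editor's note: this startup is the dirty-start recovery path. Because the previous owner of ftl0 was SIGKILLed, the superblock loads with "SHM: clean 0, shm_clean 0", the blobstore is recovered rather than cleanly opened, and the NV cache, valid map, band info and trim state are restored step by step above, with the P2L checkpoints and the L2P replayed just below before the device is re-marked dirty and user I/O resumes. The layout figures printed during this startup are internally consistent; a quick shell-arithmetic cross-check, where the 4096 B block/page size is an assumption consistent with the --bs=4096 writes elsewhere in this test:

  echo $(( 20971520 * 4 / 1024 / 1024 ))   # "L2P entries: 20971520" x "L2P address size: 4" B
                                           # = 80 -> "Region l2p ... blocks: 80.00 MiB"
  echo $(( 2048 * 4096 / 1024 / 1024 ))    # "P2L checkpoint pages: 2048" x 4096 B
                                           # = 8  -> "Region p2l0..p2l3 ... blocks: 8.00 MiB"

End of note.)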
00:23:39.179 [2024-11-17 08:23:44.044280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.179 [2024-11-17 08:23:44.044351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:39.179 [2024-11-17 08:23:44.044370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.419 ms 00:23:39.179 [2024-11-17 08:23:44.044381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.179 [2024-11-17 08:23:44.055539] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:39.179 [2024-11-17 08:23:44.058214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.179 [2024-11-17 08:23:44.058248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:39.179 [2024-11-17 08:23:44.058265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.752 ms 00:23:39.179 [2024-11-17 08:23:44.058275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.179 [2024-11-17 08:23:44.058469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.179 [2024-11-17 08:23:44.058489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:39.179 [2024-11-17 08:23:44.058503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:39.179 [2024-11-17 08:23:44.058514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.179 [2024-11-17 08:23:44.058614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.179 [2024-11-17 08:23:44.058634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:39.179 [2024-11-17 08:23:44.058661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:39.179 [2024-11-17 08:23:44.058672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.179 [2024-11-17 08:23:44.058702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.179 [2024-11-17 08:23:44.058722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:39.179 [2024-11-17 08:23:44.058733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:39.179 [2024-11-17 08:23:44.058744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.179 [2024-11-17 08:23:44.058783] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:39.179 [2024-11-17 08:23:44.058800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.179 [2024-11-17 08:23:44.058811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:39.179 [2024-11-17 08:23:44.058822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:39.179 [2024-11-17 08:23:44.058832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.179 [2024-11-17 08:23:44.086014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.179 [2024-11-17 08:23:44.086326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:39.179 [2024-11-17 08:23:44.086449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.150 ms 00:23:39.179 [2024-11-17 08:23:44.086501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.179 [2024-11-17 08:23:44.086714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.179 [2024-11-17 
08:23:44.086872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:39.179 [2024-11-17 08:23:44.086987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:39.179 [2024-11-17 08:23:44.087011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.179 [2024-11-17 08:23:44.088373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.890 ms, result 0 00:23:40.117  [2024-11-17T08:23:46.508Z] Copying: 23/1024 [MB] (23 MBps) [some 40 carriage-return progress updates, ~23 MB apiece at 20-24 MBps, elided] [2024-11-17T08:24:29.309Z] Copying: 1023/1024 [MB] (20 MBps) [2024-11-17T08:24:29.310Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-17 08:24:29.109886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.298 [2024-11-17 08:24:29.110016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:24.298 [2024-11-17 08:24:29.110071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004
ms 00:24:24.298 [2024-11-17 08:24:29.110084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.298 [2024-11-17 08:24:29.113544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:24.298 [2024-11-17 08:24:29.119878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.298 [2024-11-17 08:24:29.120090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:24.298 [2024-11-17 08:24:29.120129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.268 ms 00:24:24.298 [2024-11-17 08:24:29.120142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.298 [2024-11-17 08:24:29.131794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.298 [2024-11-17 08:24:29.131851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:24.298 [2024-11-17 08:24:29.131884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.484 ms 00:24:24.298 [2024-11-17 08:24:29.131894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.298 [2024-11-17 08:24:29.153281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.298 [2024-11-17 08:24:29.153322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:24.298 [2024-11-17 08:24:29.153354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.367 ms 00:24:24.298 [2024-11-17 08:24:29.153365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.298 [2024-11-17 08:24:29.158941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.298 [2024-11-17 08:24:29.159145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:24.298 [2024-11-17 08:24:29.159171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.539 ms 00:24:24.298 [2024-11-17 08:24:29.159184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.298 [2024-11-17 08:24:29.184949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.298 [2024-11-17 08:24:29.184987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:24.298 [2024-11-17 08:24:29.185019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.713 ms 00:24:24.298 [2024-11-17 08:24:29.185029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.298 [2024-11-17 08:24:29.200421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.298 [2024-11-17 08:24:29.200459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:24.298 [2024-11-17 08:24:29.200474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.353 ms 00:24:24.298 [2024-11-17 08:24:29.200484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.561 [2024-11-17 08:24:29.316958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.561 [2024-11-17 08:24:29.317153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:24.561 [2024-11-17 08:24:29.317180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.432 ms 00:24:24.561 [2024-11-17 08:24:29.317199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.561 [2024-11-17 08:24:29.342235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.561 [2024-11-17 
08:24:29.342271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:24.561 [2024-11-17 08:24:29.342285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.010 ms 00:24:24.561 [2024-11-17 08:24:29.342294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.561 [2024-11-17 08:24:29.366450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.561 [2024-11-17 08:24:29.366486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:24.561 [2024-11-17 08:24:29.366501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.117 ms 00:24:24.561 [2024-11-17 08:24:29.366510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.561 [2024-11-17 08:24:29.390554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.561 [2024-11-17 08:24:29.390590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:24.561 [2024-11-17 08:24:29.390604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.007 ms 00:24:24.561 [2024-11-17 08:24:29.390613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.561 [2024-11-17 08:24:29.414953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.561 [2024-11-17 08:24:29.414990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:24.561 [2024-11-17 08:24:29.415004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.279 ms 00:24:24.561 [2024-11-17 08:24:29.415013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.561 [2024-11-17 08:24:29.415050] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:24.561 [2024-11-17 08:24:29.415069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:24:24.561 [2024-11-17 08:24:29.415095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:24.561 [2024-11-17 08:24:29.415124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:24.561 [2024-11-17 08:24:29.415134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:24.561 [2024-11-17 08:24:29.415143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:24.561 [2024-11-17 08:24:29.415153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:24.561 [2024-11-17 08:24:29.415163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:24.561 [2024-11-17 08:24:29.415172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:24.561 [2024-11-17 08:24:29.415182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:24.561 [2024-11-17 08:24:29.415207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415237] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 
[2024-11-17 08:24:29.415537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:24:24.562 [2024-11-17 08:24:29.415822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.415996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:24.562 [2024-11-17 08:24:29.416180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:24.563 [2024-11-17 08:24:29.416190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:24.563 [2024-11-17 08:24:29.416200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:24.563 [2024-11-17 08:24:29.416209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:24.563 [2024-11-17 08:24:29.416219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:24.563 [2024-11-17 08:24:29.416237] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:24.563 [2024-11-17 08:24:29.416248] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 851e0140-a546-4984-83c2-b80f6477b181 00:24:24.563 [2024-11-17 08:24:29.416258] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:24:24.563 [2024-11-17 08:24:29.416272] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130496 00:24:24.563 [2024-11-17 08:24:29.416292] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:24:24.563 [2024-11-17 08:24:29.416303] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:24:24.563 [2024-11-17 08:24:29.416313] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:24.563 [2024-11-17 08:24:29.416323] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:24.563 [2024-11-17 08:24:29.416332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:24.563 [2024-11-17 08:24:29.416341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:24.563 [2024-11-17 08:24:29.416349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:24.563 [2024-11-17 08:24:29.416359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.563 [2024-11-17 08:24:29.416368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:24.563 [2024-11-17 08:24:29.416379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:24:24.563 [2024-11-17 08:24:29.416388] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.563 [2024-11-17 08:24:29.429645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.563 [2024-11-17 08:24:29.429680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:24.563 [2024-11-17 08:24:29.429693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.236 ms 00:24:24.563 [2024-11-17 08:24:29.429703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.563 [2024-11-17 08:24:29.430088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.563 [2024-11-17 08:24:29.430163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:24.563 [2024-11-17 08:24:29.430180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:24:24.563 [2024-11-17 08:24:29.430206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.563 [2024-11-17 08:24:29.463310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.563 [2024-11-17 08:24:29.463351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:24.563 [2024-11-17 08:24:29.463381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.563 [2024-11-17 08:24:29.463390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.563 [2024-11-17 08:24:29.463482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.563 [2024-11-17 08:24:29.463497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:24.563 [2024-11-17 08:24:29.463507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.563 [2024-11-17 08:24:29.463517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.563 [2024-11-17 08:24:29.463619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.563 [2024-11-17 08:24:29.463638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:24.563 [2024-11-17 08:24:29.463650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.563 [2024-11-17 08:24:29.463673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.563 [2024-11-17 08:24:29.463694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.563 [2024-11-17 08:24:29.463706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:24.563 [2024-11-17 08:24:29.463716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.563 [2024-11-17 08:24:29.463726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.563 [2024-11-17 08:24:29.542230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.563 [2024-11-17 08:24:29.542295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.563 [2024-11-17 08:24:29.542311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.563 [2024-11-17 08:24:29.542321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.822 [2024-11-17 08:24:29.608756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.822 [2024-11-17 08:24:29.608808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:24.822 [2024-11-17 08:24:29.608825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:24.822 [2024-11-17 08:24:29.608834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.822 [2024-11-17 08:24:29.608909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.822 [2024-11-17 08:24:29.608924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.822 [2024-11-17 08:24:29.608934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.822 [2024-11-17 08:24:29.608942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.822 [2024-11-17 08:24:29.609028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.822 [2024-11-17 08:24:29.609046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.822 [2024-11-17 08:24:29.609056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.822 [2024-11-17 08:24:29.609064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.822 [2024-11-17 08:24:29.609240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.822 [2024-11-17 08:24:29.609265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.822 [2024-11-17 08:24:29.609276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.822 [2024-11-17 08:24:29.609286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.822 [2024-11-17 08:24:29.609332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.822 [2024-11-17 08:24:29.609348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:24.822 [2024-11-17 08:24:29.609359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.822 [2024-11-17 08:24:29.609369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.822 [2024-11-17 08:24:29.609409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.822 [2024-11-17 08:24:29.609428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:24.822 [2024-11-17 08:24:29.609440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.822 [2024-11-17 08:24:29.609449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.822 [2024-11-17 08:24:29.609527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.822 [2024-11-17 08:24:29.609557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:24.822 [2024-11-17 08:24:29.609567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.822 [2024-11-17 08:24:29.609577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.822 [2024-11-17 08:24:29.609720] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 501.966 ms, result 0 00:24:26.202 00:24:26.202 00:24:26.202 08:24:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:28.108 08:24:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:28.108 [2024-11-17 08:24:32.905019] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 
initialization... 00:24:28.108 [2024-11-17 08:24:32.905201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79243 ] 00:24:28.108 [2024-11-17 08:24:33.078120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.368 [2024-11-17 08:24:33.202544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.627 [2024-11-17 08:24:33.467766] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.627 [2024-11-17 08:24:33.467991] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:28.627 [2024-11-17 08:24:33.622849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-11-17 08:24:33.623044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:28.627 [2024-11-17 08:24:33.623080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:28.627 [2024-11-17 08:24:33.623110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-11-17 08:24:33.623198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-11-17 08:24:33.623216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:28.627 [2024-11-17 08:24:33.623230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:28.627 [2024-11-17 08:24:33.623240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-11-17 08:24:33.623269] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:28.627 [2024-11-17 08:24:33.624147] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:28.627 [2024-11-17 08:24:33.624191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.627 [2024-11-17 08:24:33.624203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:28.627 [2024-11-17 08:24:33.624214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:24:28.627 [2024-11-17 08:24:33.624224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.627 [2024-11-17 08:24:33.625327] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:28.889 [2024-11-17 08:24:33.639001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.889 [2024-11-17 08:24:33.639055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:28.889 [2024-11-17 08:24:33.639088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.676 ms 00:24:28.889 [2024-11-17 08:24:33.639131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.889 [2024-11-17 08:24:33.639256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.889 [2024-11-17 08:24:33.639274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:28.889 [2024-11-17 08:24:33.639285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:28.889 [2024-11-17 08:24:33.639294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.889 [2024-11-17 08:24:33.644173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:28.889 [2024-11-17 08:24:33.644210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:28.889 [2024-11-17 08:24:33.644223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:24:28.889 [2024-11-17 08:24:33.644233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.889 [2024-11-17 08:24:33.644314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.889 [2024-11-17 08:24:33.644330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:28.889 [2024-11-17 08:24:33.644340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:28.889 [2024-11-17 08:24:33.644349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.889 [2024-11-17 08:24:33.644396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.889 [2024-11-17 08:24:33.644410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:28.889 [2024-11-17 08:24:33.644431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:28.889 [2024-11-17 08:24:33.644440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.889 [2024-11-17 08:24:33.644468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:28.889 [2024-11-17 08:24:33.648017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.889 [2024-11-17 08:24:33.648202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:28.889 [2024-11-17 08:24:33.648228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.556 ms 00:24:28.889 [2024-11-17 08:24:33.648246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.889 [2024-11-17 08:24:33.648286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.889 [2024-11-17 08:24:33.648300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:28.889 [2024-11-17 08:24:33.648311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:28.889 [2024-11-17 08:24:33.648321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.889 [2024-11-17 08:24:33.648367] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:28.889 [2024-11-17 08:24:33.648396] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:28.889 [2024-11-17 08:24:33.648433] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:28.889 [2024-11-17 08:24:33.648454] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:28.889 [2024-11-17 08:24:33.648564] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:28.889 [2024-11-17 08:24:33.648577] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:28.889 [2024-11-17 08:24:33.648589] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:28.889 [2024-11-17 08:24:33.648602] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:28.889 [2024-11-17 08:24:33.648613] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:28.889 [2024-11-17 08:24:33.648624] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:28.889 [2024-11-17 08:24:33.648633] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:28.889 [2024-11-17 08:24:33.648641] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:28.889 [2024-11-17 08:24:33.648650] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:28.889 [2024-11-17 08:24:33.648664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.889 [2024-11-17 08:24:33.648674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:28.889 [2024-11-17 08:24:33.648683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:24:28.889 [2024-11-17 08:24:33.648707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.889 [2024-11-17 08:24:33.648795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.889 [2024-11-17 08:24:33.648806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:28.889 [2024-11-17 08:24:33.648816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:28.889 [2024-11-17 08:24:33.648825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.889 [2024-11-17 08:24:33.648921] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:28.889 [2024-11-17 08:24:33.648941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:28.889 [2024-11-17 08:24:33.648951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:28.889 [2024-11-17 08:24:33.648961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.889 [2024-11-17 08:24:33.648969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:28.889 [2024-11-17 08:24:33.648977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:28.889 [2024-11-17 08:24:33.648986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:28.889 [2024-11-17 08:24:33.648994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:28.889 [2024-11-17 08:24:33.649003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:28.889 [2024-11-17 08:24:33.649011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:28.889 [2024-11-17 08:24:33.649019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:28.890 [2024-11-17 08:24:33.649027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:28.890 [2024-11-17 08:24:33.649035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:28.890 [2024-11-17 08:24:33.649043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:28.890 [2024-11-17 08:24:33.649053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:28.890 [2024-11-17 08:24:33.649071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.890 [2024-11-17 08:24:33.649079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:28.890 [2024-11-17 08:24:33.649087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:28.890 [2024-11-17 08:24:33.649095] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.890 [2024-11-17 08:24:33.649103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:28.890 [2024-11-17 08:24:33.649112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:28.890 [2024-11-17 08:24:33.649120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.890 [2024-11-17 08:24:33.649128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:28.890 [2024-11-17 08:24:33.649157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:28.890 [2024-11-17 08:24:33.649185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.890 [2024-11-17 08:24:33.649194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:28.890 [2024-11-17 08:24:33.649203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:28.890 [2024-11-17 08:24:33.649211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.890 [2024-11-17 08:24:33.649234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:28.890 [2024-11-17 08:24:33.649243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:28.890 [2024-11-17 08:24:33.649259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.890 [2024-11-17 08:24:33.649268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:28.890 [2024-11-17 08:24:33.649277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:28.890 [2024-11-17 08:24:33.649285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:28.890 [2024-11-17 08:24:33.649294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:28.890 [2024-11-17 08:24:33.649303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:28.890 [2024-11-17 08:24:33.649311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:28.890 [2024-11-17 08:24:33.649322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:28.890 [2024-11-17 08:24:33.649331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:28.890 [2024-11-17 08:24:33.649340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.890 [2024-11-17 08:24:33.649349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:28.890 [2024-11-17 08:24:33.649357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:28.890 [2024-11-17 08:24:33.649365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.890 [2024-11-17 08:24:33.649373] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:28.890 [2024-11-17 08:24:33.649383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:28.890 [2024-11-17 08:24:33.649392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:28.890 [2024-11-17 08:24:33.649403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.890 [2024-11-17 08:24:33.649412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:28.890 [2024-11-17 08:24:33.649421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:28.890 [2024-11-17 08:24:33.649429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:28.890 
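
The layout dump above is internally consistent and can be cross-checked by hand: ftl_layout.c reports each region in MiB, while the superblock dump that follows repeats the same regions as hex block offsets and sizes, and the 0x5000-block region of type 0x2 lines up exactly with the 80.00 MiB l2p region. A minimal sketch of the arithmetic in Python; every value is copied from the log, and only the 4 KiB FTL logical block size is an assumption (it is not printed here):

# Cross-check of the ftl_layout.c numbers above; all inputs are copied
# from the log, only the 4 KiB block size is assumed.
FTL_BLOCK_SIZE = 4096  # bytes (assumed)

# L2P table: "L2P entries: 20971520" at "L2P address size: 4" bytes each
l2p_bytes = 20971520 * 4
assert l2p_bytes == 80 * 2**20                 # "Region l2p ... blocks: 80.00 MiB"

# Same region in the superblock dump below: type:0x2, blk_sz:0x5000 blocks,
# starting at blk_offs:0x20 (0x20 * 4 KiB = 0.125 MiB, the "offset: 0.12 MiB")
assert 0x5000 * FTL_BLOCK_SIZE == l2p_bytes
assert 0x20 * FTL_BLOCK_SIZE == 128 * 1024

# 20971520 mapped LBAs of 4 KiB each is the exposed logical space
print(20971520 * FTL_BLOCK_SIZE / 2**30, "GiB")  # -> 80.0

Under that block-size assumption, the 103424.00 MiB base device (about 101 GiB) leaves roughly 21 GiB beyond the 80 GiB logical space for metadata and spare area.
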
[2024-11-17 08:24:33.649438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:28.890 [2024-11-17 08:24:33.649447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:28.890 [2024-11-17 08:24:33.649455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:28.890 [2024-11-17 08:24:33.649465] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:28.890 [2024-11-17 08:24:33.649477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:28.890 [2024-11-17 08:24:33.649487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:28.890 [2024-11-17 08:24:33.649513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:28.890 [2024-11-17 08:24:33.649522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:28.890 [2024-11-17 08:24:33.649531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:28.890 [2024-11-17 08:24:33.649541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:28.890 [2024-11-17 08:24:33.649551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:28.890 [2024-11-17 08:24:33.649560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:28.890 [2024-11-17 08:24:33.649569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:28.890 [2024-11-17 08:24:33.649579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:28.890 [2024-11-17 08:24:33.649588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:28.890 [2024-11-17 08:24:33.649613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:28.890 [2024-11-17 08:24:33.649623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:28.890 [2024-11-17 08:24:33.649633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:28.890 [2024-11-17 08:24:33.649643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:28.890 [2024-11-17 08:24:33.649652] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:28.890 [2024-11-17 08:24:33.649669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:28.890 [2024-11-17 08:24:33.649680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:28.890 [2024-11-17 08:24:33.649691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:28.890 [2024-11-17 08:24:33.649701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:28.890 [2024-11-17 08:24:33.649712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:28.890 [2024-11-17 08:24:33.649722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.890 [2024-11-17 08:24:33.649733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:28.890 [2024-11-17 08:24:33.649743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:24:28.890 [2024-11-17 08:24:33.649753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.890 [2024-11-17 08:24:33.676276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.890 [2024-11-17 08:24:33.676326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:28.890 [2024-11-17 08:24:33.676342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.448 ms 00:24:28.890 [2024-11-17 08:24:33.676352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.890 [2024-11-17 08:24:33.676445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.890 [2024-11-17 08:24:33.676459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:28.890 [2024-11-17 08:24:33.676469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:28.890 [2024-11-17 08:24:33.676478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.890 [2024-11-17 08:24:33.721893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.890 [2024-11-17 08:24:33.721939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:28.890 [2024-11-17 08:24:33.721955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.341 ms 00:24:28.890 [2024-11-17 08:24:33.721965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.890 [2024-11-17 08:24:33.722021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.890 [2024-11-17 08:24:33.722037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:28.890 [2024-11-17 08:24:33.722047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:28.890 [2024-11-17 08:24:33.722062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.890 [2024-11-17 08:24:33.722551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.890 [2024-11-17 08:24:33.722576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:28.890 [2024-11-17 08:24:33.722588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:24:28.890 [2024-11-17 08:24:33.722598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.890 [2024-11-17 08:24:33.722740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.890 [2024-11-17 08:24:33.722758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:28.890 [2024-11-17 08:24:33.722783] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:24:28.890 [2024-11-17 08:24:33.722799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.890 [2024-11-17 08:24:33.736714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.890 [2024-11-17 08:24:33.736879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:28.890 [2024-11-17 08:24:33.737027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.889 ms 00:24:28.890 [2024-11-17 08:24:33.737076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.890 [2024-11-17 08:24:33.750155] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:28.890 [2024-11-17 08:24:33.750344] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:28.890 [2024-11-17 08:24:33.750494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.890 [2024-11-17 08:24:33.750600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:28.891 [2024-11-17 08:24:33.750623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.231 ms 00:24:28.891 [2024-11-17 08:24:33.750633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.891 [2024-11-17 08:24:33.773851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.891 [2024-11-17 08:24:33.774023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:28.891 [2024-11-17 08:24:33.774049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.175 ms 00:24:28.891 [2024-11-17 08:24:33.774060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.891 [2024-11-17 08:24:33.786660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.891 [2024-11-17 08:24:33.786706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:28.891 [2024-11-17 08:24:33.786720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.522 ms 00:24:28.891 [2024-11-17 08:24:33.786729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.891 [2024-11-17 08:24:33.799085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.891 [2024-11-17 08:24:33.799123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:28.891 [2024-11-17 08:24:33.799136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.311 ms 00:24:28.891 [2024-11-17 08:24:33.799145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.891 [2024-11-17 08:24:33.799843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.891 [2024-11-17 08:24:33.799882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:28.891 [2024-11-17 08:24:33.799896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:24:28.891 [2024-11-17 08:24:33.799910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.891 [2024-11-17 08:24:33.857195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.891 [2024-11-17 08:24:33.857480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:28.891 [2024-11-17 08:24:33.857517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.261 ms 00:24:28.891 [2024-11-17 08:24:33.857528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.891 [2024-11-17 08:24:33.867375] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:28.891 [2024-11-17 08:24:33.869304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.891 [2024-11-17 08:24:33.869334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:28.891 [2024-11-17 08:24:33.869349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.720 ms 00:24:28.891 [2024-11-17 08:24:33.869358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.891 [2024-11-17 08:24:33.869468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.891 [2024-11-17 08:24:33.869486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:28.891 [2024-11-17 08:24:33.869497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:28.891 [2024-11-17 08:24:33.869509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.891 [2024-11-17 08:24:33.871061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.891 [2024-11-17 08:24:33.871123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:28.891 [2024-11-17 08:24:33.871155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.505 ms 00:24:28.891 [2024-11-17 08:24:33.871165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.891 [2024-11-17 08:24:33.871209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.891 [2024-11-17 08:24:33.871222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:28.891 [2024-11-17 08:24:33.871235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:28.891 [2024-11-17 08:24:33.871245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.891 [2024-11-17 08:24:33.871303] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:28.891 [2024-11-17 08:24:33.871336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.891 [2024-11-17 08:24:33.871347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:28.891 [2024-11-17 08:24:33.871357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:28.891 [2024-11-17 08:24:33.871366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.891 [2024-11-17 08:24:33.896951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.891 [2024-11-17 08:24:33.896990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:28.891 [2024-11-17 08:24:33.897036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.559 ms 00:24:28.891 [2024-11-17 08:24:33.897051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.891 [2024-11-17 08:24:33.897160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.891 [2024-11-17 08:24:33.897193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:28.891 [2024-11-17 08:24:33.897205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:28.891 [2024-11-17 08:24:33.897214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
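
Every step of the startup above is traced by mngt/ftl_mngt.c as the same four *NOTICE* records (Action or Rollback, then name, duration, status), which makes a captured console log easy to mine when a startup regression needs triaging. A rough parser sketch: the stdin convention and the ranking by duration are choices made here, while the regex assumes only the literal record text visible above (\s+ keeps it tolerant of the line wrapping in captured output):

import re
import sys

# Pair each "name: <step>" record with the "duration: <n> ms" record that
# follows it and print the slowest FTL management steps.
PAT = re.compile(
    r"name:\s+(.+?)\s+"                        # 428:trace_step record
    r"(?:\d{2}:\d{2}:\d{2}\.\d{3}\s+)?"        # optional Jenkins elapsed stamp
    r"\[[^\]]*\]\s+mngt/ftl_mngt\.c:\s+430:trace_step:\s+\*NOTICE\*:\s+"
    r"\[FTL\]\[ftl0\]\s+duration:\s+([\d.]+)\s+ms",
    re.S,
)

text = sys.stdin.read()
steps = [(float(ms), name) for name, ms in PAT.findall(text)]
for ms, name in sorted(steps, reverse=True)[:5]:
    print(f"{ms:10.3f} ms  {name}")

Fed this section of the log, it would put the 57.261 ms 'Restore P2L checkpoints' step at the top, ahead of the 45.341 ms 'Initialize NV cache' and 26.448 ms 'Initialize metadata' steps.
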
00:24:29.151 [2024-11-17 08:24:33.898649] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 275.094 ms, result 0
00:24:30.099 [2024-11-17T08:24:36.488Z] Copying: 968/1048576 [kB] (968 kBps)
[2024-11-17T08:24:37.426Z] Copying: 5464/1048576 [kB] (4496 kBps)
[2024-11-17T08:24:38.361Z] Copying: 30/1024 [MB] (25 MBps)
[2024-11-17T08:24:39.298Z] Copying: 58/1024 [MB] (27 MBps)
[2024-11-17T08:24:40.233Z] Copying: 86/1024 [MB] (27 MBps)
[2024-11-17T08:24:41.171Z] Copying: 114/1024 [MB] (27 MBps)
[2024-11-17T08:24:42.109Z] Copying: 142/1024 [MB] (28 MBps)
[2024-11-17T08:24:43.489Z] Copying: 170/1024 [MB] (28 MBps)
[2024-11-17T08:24:44.427Z] Copying: 199/1024 [MB] (28 MBps)
[2024-11-17T08:24:45.364Z] Copying: 226/1024 [MB] (27 MBps)
[2024-11-17T08:24:46.302Z] Copying: 255/1024 [MB] (28 MBps)
[2024-11-17T08:24:47.240Z] Copying: 283/1024 [MB] (28 MBps)
[2024-11-17T08:24:48.175Z] Copying: 311/1024 [MB] (28 MBps)
[2024-11-17T08:24:49.112Z] Copying: 339/1024 [MB] (28 MBps)
[2024-11-17T08:24:50.490Z] Copying: 367/1024 [MB] (28 MBps)
[2024-11-17T08:24:51.430Z] Copying: 394/1024 [MB] (26 MBps)
[2024-11-17T08:24:52.368Z] Copying: 423/1024 [MB] (28 MBps)
[2024-11-17T08:24:53.307Z] Copying: 452/1024 [MB] (28 MBps)
[2024-11-17T08:24:54.246Z] Copying: 480/1024 [MB] (28 MBps)
[2024-11-17T08:24:55.185Z] Copying: 508/1024 [MB] (28 MBps)
[2024-11-17T08:24:56.123Z] Copying: 537/1024 [MB] (28 MBps)
[2024-11-17T08:24:57.504Z] Copying: 565/1024 [MB] (28 MBps)
[2024-11-17T08:24:58.441Z] Copying: 594/1024 [MB] (28 MBps)
[2024-11-17T08:24:59.096Z] Copying: 621/1024 [MB] (27 MBps)
[2024-11-17T08:25:00.474Z] Copying: 649/1024 [MB] (27 MBps)
[2024-11-17T08:25:01.411Z] Copying: 676/1024 [MB] (27 MBps)
[2024-11-17T08:25:02.350Z] Copying: 705/1024 [MB] (28 MBps)
[2024-11-17T08:25:03.286Z] Copying: 733/1024 [MB] (28 MBps)
[2024-11-17T08:25:04.223Z] Copying: 761/1024 [MB] (28 MBps)
[2024-11-17T08:25:05.160Z] Copying: 790/1024 [MB] (28 MBps)
[2024-11-17T08:25:06.098Z] Copying: 817/1024 [MB] (27 MBps)
[2024-11-17T08:25:07.476Z] Copying: 845/1024 [MB] (27 MBps)
[2024-11-17T08:25:08.414Z] Copying: 873/1024 [MB] (28 MBps)
[2024-11-17T08:25:09.351Z] Copying: 902/1024 [MB] (28 MBps)
[2024-11-17T08:25:10.287Z] Copying: 930/1024 [MB] (28 MBps)
[2024-11-17T08:25:11.225Z] Copying: 959/1024 [MB] (28 MBps)
[2024-11-17T08:25:12.163Z] Copying: 987/1024 [MB] (28 MBps)
[2024-11-17T08:25:12.422Z] Copying: 1015/1024 [MB] (27 MBps)
[2024-11-17T08:25:12.682Z] Copying: 1024/1024 [MB] (average 26 MBps)
[2024-11-17 08:25:12.605802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:07.670 [2024-11-17 08:25:12.605911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:07.670 [2024-11-17 08:25:12.605960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:07.670 [2024-11-17 08:25:12.605972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.670 [2024-11-17 08:25:12.606004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:07.670 [2024-11-17 08:25:12.609175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:07.670 [2024-11-17 08:25:12.609228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:07.670 [2024-11-17 08:25:12.609242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.147 ms
00:25:07.670 [2024-11-17 08:25:12.609253] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.670 [2024-11-17 08:25:12.609522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.670 [2024-11-17 08:25:12.609543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:07.670 [2024-11-17 08:25:12.609560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:25:07.670 [2024-11-17 08:25:12.609570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.670 [2024-11-17 08:25:12.619809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.670 [2024-11-17 08:25:12.619874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:07.670 [2024-11-17 08:25:12.619907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.217 ms 00:25:07.670 [2024-11-17 08:25:12.619920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.670 [2024-11-17 08:25:12.625400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.670 [2024-11-17 08:25:12.625451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:07.670 [2024-11-17 08:25:12.625480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.440 ms 00:25:07.670 [2024-11-17 08:25:12.625498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.670 [2024-11-17 08:25:12.651211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.670 [2024-11-17 08:25:12.651265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:07.670 [2024-11-17 08:25:12.651295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.671 ms 00:25:07.670 [2024-11-17 08:25:12.651306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.670 [2024-11-17 08:25:12.666418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.670 [2024-11-17 08:25:12.666455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:07.670 [2024-11-17 08:25:12.666485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.072 ms 00:25:07.670 [2024-11-17 08:25:12.666495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.670 [2024-11-17 08:25:12.668449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.670 [2024-11-17 08:25:12.668489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:07.670 [2024-11-17 08:25:12.668519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.911 ms 00:25:07.671 [2024-11-17 08:25:12.668529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.931 [2024-11-17 08:25:12.695398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.931 [2024-11-17 08:25:12.695473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:07.931 [2024-11-17 08:25:12.695503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.841 ms 00:25:07.931 [2024-11-17 08:25:12.695513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.931 [2024-11-17 08:25:12.720480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.931 [2024-11-17 08:25:12.720516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:07.931 [2024-11-17 08:25:12.720558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.927 ms 
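
For scale, the 1024 [MB] total in the spdk_dd copy progress above follows directly from the invocation earlier in the test (--ib=ftl0 --count=262144), assuming each copied block is one 4 KiB FTL logical block, and the first and last progress stamps bound the throughput. A quick check in Python; the block size is the only assumption, the timestamps are copied from the progress lines:

# spdk_dd copy size: 262144 blocks of (assumed) 4 KiB each
print(262144 * 4096 // 2**20, "MiB")      # -> 1024, matching ".../1024 [MB]"

from datetime import datetime, timezone

# First and last progress stamps from the spdk_dd output above
t0 = datetime(2024, 11, 17, 8, 24, 36, 488000, tzinfo=timezone.utc)
t1 = datetime(2024, 11, 17, 8, 25, 12, 682000, tzinfo=timezone.utc)
print(round(1024 / (t1 - t0).total_seconds()), "MBps")  # -> 28

That is slightly above the 26 MBps average spdk_dd itself reports, presumably because its own window also covers the ramp-up before the first progress line was printed.
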
00:25:07.931 [2024-11-17 08:25:12.720567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.931 [2024-11-17 08:25:12.745026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.931 [2024-11-17 08:25:12.745062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:07.931 [2024-11-17 08:25:12.745090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.420 ms 00:25:07.931 [2024-11-17 08:25:12.745117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.931 [2024-11-17 08:25:12.770965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.931 [2024-11-17 08:25:12.771019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:07.931 [2024-11-17 08:25:12.771049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.784 ms 00:25:07.931 [2024-11-17 08:25:12.771058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.931 [2024-11-17 08:25:12.771106] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:07.931 [2024-11-17 08:25:12.771128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:07.931 [2024-11-17 08:25:12.771141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:07.931 [2024-11-17 08:25:12.771151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:07.931 [2024-11-17 08:25:12.771162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:07.931 [2024-11-17 08:25:12.771172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:07.931 [2024-11-17 08:25:12.771181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:07.931 [2024-11-17 08:25:12.771206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:07.931 [2024-11-17 08:25:12.771232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:07.931 [2024-11-17 08:25:12.771257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: 
free 00:25:07.932 [2024-11-17 08:25:12.771378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 
261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.771999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:07.932 [2024-11-17 08:25:12.772317] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:25:07.932 [2024-11-17 08:25:12.772329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:25:07.932 [2024-11-17 08:25:12.772339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:25:07.932 [2024-11-17 08:25:12.772350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:25:07.932 [2024-11-17 08:25:12.772361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:25:07.932 [2024-11-17 08:25:12.772373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:25:07.933 [2024-11-17 08:25:12.772384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:25:07.933 [2024-11-17 08:25:12.772395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:25:07.933 [2024-11-17 08:25:12.772405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:07.933 [2024-11-17 08:25:12.772425] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:07.933 [2024-11-17 08:25:12.772437] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 851e0140-a546-4984-83c2-b80f6477b181
00:25:07.933 [2024-11-17 08:25:12.772448] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:25:07.933 [2024-11-17 08:25:12.772458] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135104
00:25:07.933 [2024-11-17 08:25:12.772468] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133120
00:25:07.933 [2024-11-17 08:25:12.772499] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149
00:25:07.933 [2024-11-17 08:25:12.772509] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:07.933 [2024-11-17 08:25:12.772520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:07.933 [2024-11-17 08:25:12.772531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:07.933 [2024-11-17 08:25:12.772552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:07.933 [2024-11-17 08:25:12.772561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:07.933 [2024-11-17 08:25:12.772574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:07.933 [2024-11-17 08:25:12.772585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:07.933 [2024-11-17 08:25:12.772596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms
00:25:07.933 [2024-11-17 08:25:12.772607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.933 [2024-11-17 08:25:12.788625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:07.933 [2024-11-17 08:25:12.788692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:07.933 [2024-11-17 08:25:12.788721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.995 ms
00:25:07.933 [2024-11-17 08:25:12.788731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.933 [2024-11-17 08:25:12.789236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:07.933 [2024-11-17 08:25:12.789297]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:07.933 [2024-11-17 08:25:12.789331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:25:07.933 [2024-11-17 08:25:12.789342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.933 [2024-11-17 08:25:12.823231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.933 [2024-11-17 08:25:12.823289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:07.933 [2024-11-17 08:25:12.823303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.933 [2024-11-17 08:25:12.823313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.933 [2024-11-17 08:25:12.823384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.933 [2024-11-17 08:25:12.823398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:07.933 [2024-11-17 08:25:12.823409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.933 [2024-11-17 08:25:12.823418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.933 [2024-11-17 08:25:12.823573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.933 [2024-11-17 08:25:12.823599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:07.933 [2024-11-17 08:25:12.823611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.933 [2024-11-17 08:25:12.823622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.933 [2024-11-17 08:25:12.823643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.933 [2024-11-17 08:25:12.823656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:07.933 [2024-11-17 08:25:12.823666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.933 [2024-11-17 08:25:12.823676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.933 [2024-11-17 08:25:12.908202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.933 [2024-11-17 08:25:12.908269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:07.933 [2024-11-17 08:25:12.908300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.933 [2024-11-17 08:25:12.908310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.192 [2024-11-17 08:25:12.976863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.193 [2024-11-17 08:25:12.976928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.193 [2024-11-17 08:25:12.976966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.193 [2024-11-17 08:25:12.976975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.193 [2024-11-17 08:25:12.977049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.193 [2024-11-17 08:25:12.977066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.193 [2024-11-17 08:25:12.977083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.193 [2024-11-17 08:25:12.977109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.193 [2024-11-17 08:25:12.977206] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.193 [2024-11-17 08:25:12.977237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.193 [2024-11-17 08:25:12.977248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.193 [2024-11-17 08:25:12.977259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.193 [2024-11-17 08:25:12.977372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.193 [2024-11-17 08:25:12.977390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.193 [2024-11-17 08:25:12.977402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.193 [2024-11-17 08:25:12.977419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.193 [2024-11-17 08:25:12.977472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.193 [2024-11-17 08:25:12.977489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:08.193 [2024-11-17 08:25:12.977500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.193 [2024-11-17 08:25:12.977510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.193 [2024-11-17 08:25:12.977553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.193 [2024-11-17 08:25:12.977567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.193 [2024-11-17 08:25:12.977577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.193 [2024-11-17 08:25:12.977594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.193 [2024-11-17 08:25:12.977641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.193 [2024-11-17 08:25:12.977656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:08.193 [2024-11-17 08:25:12.977667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.193 [2024-11-17 08:25:12.977677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.193 [2024-11-17 08:25:12.977808] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.975 ms, result 0 00:25:08.762 00:25:08.762 00:25:08.762 08:25:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:10.663 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:10.664 08:25:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:10.664 [2024-11-17 08:25:15.571299] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:25:10.664 [2024-11-17 08:25:15.571466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79669 ] 00:25:10.923 [2024-11-17 08:25:15.744503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.923 [2024-11-17 08:25:15.869110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.183 [2024-11-17 08:25:16.127288] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:11.183 [2024-11-17 08:25:16.127373] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:11.444 [2024-11-17 08:25:16.284393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.444 [2024-11-17 08:25:16.284437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:11.444 [2024-11-17 08:25:16.284475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:11.444 [2024-11-17 08:25:16.284485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.444 [2024-11-17 08:25:16.284541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.444 [2024-11-17 08:25:16.284557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:11.444 [2024-11-17 08:25:16.284571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:11.444 [2024-11-17 08:25:16.284580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.444 [2024-11-17 08:25:16.284605] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:11.444 [2024-11-17 08:25:16.285478] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:11.444 [2024-11-17 08:25:16.285546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.444 [2024-11-17 08:25:16.285558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:11.444 [2024-11-17 08:25:16.285569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:25:11.444 [2024-11-17 08:25:16.285578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.444 [2024-11-17 08:25:16.286715] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:11.444 [2024-11-17 08:25:16.299668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.444 [2024-11-17 08:25:16.299736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:11.444 [2024-11-17 08:25:16.299765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.954 ms 00:25:11.444 [2024-11-17 08:25:16.299775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.444 [2024-11-17 08:25:16.299856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.444 [2024-11-17 08:25:16.299874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:11.444 [2024-11-17 08:25:16.299885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:11.444 [2024-11-17 08:25:16.299894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.444 [2024-11-17 08:25:16.304432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:11.444 [2024-11-17 08:25:16.304467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:11.444 [2024-11-17 08:25:16.304494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.470 ms 00:25:11.444 [2024-11-17 08:25:16.304503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.444 [2024-11-17 08:25:16.304587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.444 [2024-11-17 08:25:16.304604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:11.444 [2024-11-17 08:25:16.304614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:11.444 [2024-11-17 08:25:16.304623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.444 [2024-11-17 08:25:16.304688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.444 [2024-11-17 08:25:16.304703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:11.444 [2024-11-17 08:25:16.304714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:11.444 [2024-11-17 08:25:16.304738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.444 [2024-11-17 08:25:16.304797] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:11.444 [2024-11-17 08:25:16.308316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.444 [2024-11-17 08:25:16.308349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:11.444 [2024-11-17 08:25:16.308377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.527 ms 00:25:11.444 [2024-11-17 08:25:16.308391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.444 [2024-11-17 08:25:16.308429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.444 [2024-11-17 08:25:16.308444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:11.444 [2024-11-17 08:25:16.308454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:11.444 [2024-11-17 08:25:16.308463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.444 [2024-11-17 08:25:16.308485] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:11.444 [2024-11-17 08:25:16.308509] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:11.444 [2024-11-17 08:25:16.308575] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:11.444 [2024-11-17 08:25:16.308610] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:11.444 [2024-11-17 08:25:16.308709] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:11.444 [2024-11-17 08:25:16.308724] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:11.444 [2024-11-17 08:25:16.308736] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:11.444 [2024-11-17 08:25:16.308749] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:11.444 [2024-11-17 08:25:16.308762] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:11.444 [2024-11-17 08:25:16.308772] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:11.444 [2024-11-17 08:25:16.308782] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:11.444 [2024-11-17 08:25:16.308791] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:11.445 [2024-11-17 08:25:16.308801] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:11.445 [2024-11-17 08:25:16.308816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.445 [2024-11-17 08:25:16.308826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:11.445 [2024-11-17 08:25:16.308837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:25:11.445 [2024-11-17 08:25:16.308846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.445 [2024-11-17 08:25:16.308928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.445 [2024-11-17 08:25:16.308941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:11.445 [2024-11-17 08:25:16.308952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:11.445 [2024-11-17 08:25:16.308962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.445 [2024-11-17 08:25:16.309086] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:11.445 [2024-11-17 08:25:16.309131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:11.445 [2024-11-17 08:25:16.309145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:11.445 [2024-11-17 08:25:16.309156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:11.445 [2024-11-17 08:25:16.309175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:11.445 [2024-11-17 08:25:16.309193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:11.445 [2024-11-17 08:25:16.309203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:11.445 [2024-11-17 08:25:16.309221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:11.445 [2024-11-17 08:25:16.309230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:11.445 [2024-11-17 08:25:16.309239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:11.445 [2024-11-17 08:25:16.309251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:11.445 [2024-11-17 08:25:16.309261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:11.445 [2024-11-17 08:25:16.309281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:11.445 [2024-11-17 08:25:16.309300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:11.445 [2024-11-17 08:25:16.309309] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:11.445 [2024-11-17 08:25:16.309328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:11.445 [2024-11-17 08:25:16.309345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:11.445 [2024-11-17 08:25:16.309354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:11.445 [2024-11-17 08:25:16.309373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:11.445 [2024-11-17 08:25:16.309382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:11.445 [2024-11-17 08:25:16.309399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:11.445 [2024-11-17 08:25:16.309408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:11.445 [2024-11-17 08:25:16.309426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:11.445 [2024-11-17 08:25:16.309435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:11.445 [2024-11-17 08:25:16.309452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:11.445 [2024-11-17 08:25:16.309461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:11.445 [2024-11-17 08:25:16.309470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:11.445 [2024-11-17 08:25:16.309479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:11.445 [2024-11-17 08:25:16.309488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:11.445 [2024-11-17 08:25:16.309496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:11.445 [2024-11-17 08:25:16.309514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:11.445 [2024-11-17 08:25:16.309523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309532] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:11.445 [2024-11-17 08:25:16.309543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:11.445 [2024-11-17 08:25:16.309553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:11.445 [2024-11-17 08:25:16.309562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:11.445 [2024-11-17 08:25:16.309572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:11.445 [2024-11-17 08:25:16.309582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:11.445 [2024-11-17 08:25:16.309591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:11.445 
[2024-11-17 08:25:16.309600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:11.445 [2024-11-17 08:25:16.309609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:11.445 [2024-11-17 08:25:16.309617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:11.445 [2024-11-17 08:25:16.309628] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:11.445 [2024-11-17 08:25:16.309641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:11.445 [2024-11-17 08:25:16.309652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:11.445 [2024-11-17 08:25:16.309663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:11.445 [2024-11-17 08:25:16.309673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:11.445 [2024-11-17 08:25:16.309682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:11.445 [2024-11-17 08:25:16.309692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:11.445 [2024-11-17 08:25:16.309702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:11.445 [2024-11-17 08:25:16.309712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:11.445 [2024-11-17 08:25:16.309722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:11.445 [2024-11-17 08:25:16.309731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:11.445 [2024-11-17 08:25:16.309741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:11.445 [2024-11-17 08:25:16.309751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:11.445 [2024-11-17 08:25:16.309761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:11.445 [2024-11-17 08:25:16.309770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:11.445 [2024-11-17 08:25:16.309780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:11.445 [2024-11-17 08:25:16.309790] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:11.445 [2024-11-17 08:25:16.309806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:11.445 [2024-11-17 08:25:16.309817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:11.445 [2024-11-17 08:25:16.309827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:11.445 [2024-11-17 08:25:16.309837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:11.445 [2024-11-17 08:25:16.309847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:11.445 [2024-11-17 08:25:16.309857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.445 [2024-11-17 08:25:16.309868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:11.446 [2024-11-17 08:25:16.309879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:25:11.446 [2024-11-17 08:25:16.309889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.446 [2024-11-17 08:25:16.337981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.446 [2024-11-17 08:25:16.338048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:11.446 [2024-11-17 08:25:16.338080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.037 ms 00:25:11.446 [2024-11-17 08:25:16.338090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.446 [2024-11-17 08:25:16.338211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.446 [2024-11-17 08:25:16.338226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:11.446 [2024-11-17 08:25:16.338237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:11.446 [2024-11-17 08:25:16.338246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.446 [2024-11-17 08:25:16.379224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.446 [2024-11-17 08:25:16.379300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:11.446 [2024-11-17 08:25:16.379316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.900 ms 00:25:11.446 [2024-11-17 08:25:16.379326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.446 [2024-11-17 08:25:16.379376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.446 [2024-11-17 08:25:16.379392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:11.446 [2024-11-17 08:25:16.379403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:11.446 [2024-11-17 08:25:16.379419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.446 [2024-11-17 08:25:16.379859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.446 [2024-11-17 08:25:16.379887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:11.446 [2024-11-17 08:25:16.379901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:25:11.446 [2024-11-17 08:25:16.379911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.446 [2024-11-17 08:25:16.380054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.446 [2024-11-17 08:25:16.380094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:11.446 [2024-11-17 08:25:16.380109] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:25:11.446 [2024-11-17 08:25:16.380127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.446 [2024-11-17 08:25:16.394014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.446 [2024-11-17 08:25:16.394068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:11.446 [2024-11-17 08:25:16.394114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.862 ms 00:25:11.446 [2024-11-17 08:25:16.394125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.446 [2024-11-17 08:25:16.407257] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:11.446 [2024-11-17 08:25:16.407295] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:11.446 [2024-11-17 08:25:16.407325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.446 [2024-11-17 08:25:16.407336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:11.446 [2024-11-17 08:25:16.407346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.092 ms 00:25:11.446 [2024-11-17 08:25:16.407355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.446 [2024-11-17 08:25:16.430759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.446 [2024-11-17 08:25:16.430802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:11.446 [2024-11-17 08:25:16.430832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.356 ms 00:25:11.446 [2024-11-17 08:25:16.430841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.446 [2024-11-17 08:25:16.444302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.446 [2024-11-17 08:25:16.444357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:11.446 [2024-11-17 08:25:16.444386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.411 ms 00:25:11.446 [2024-11-17 08:25:16.444396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.706 [2024-11-17 08:25:16.459850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.706 [2024-11-17 08:25:16.459903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:11.706 [2024-11-17 08:25:16.459932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.411 ms 00:25:11.706 [2024-11-17 08:25:16.459941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.706 [2024-11-17 08:25:16.460905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.706 [2024-11-17 08:25:16.460936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:11.706 [2024-11-17 08:25:16.460964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:25:11.706 [2024-11-17 08:25:16.460978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.706 [2024-11-17 08:25:16.519801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.706 [2024-11-17 08:25:16.519863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:11.706 [2024-11-17 08:25:16.519902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 58.800 ms 00:25:11.706 [2024-11-17 08:25:16.519911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.706 [2024-11-17 08:25:16.529722] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:11.706 [2024-11-17 08:25:16.531684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.706 [2024-11-17 08:25:16.531732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:11.706 [2024-11-17 08:25:16.531746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.717 ms 00:25:11.706 [2024-11-17 08:25:16.531755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.706 [2024-11-17 08:25:16.531847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.706 [2024-11-17 08:25:16.531865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:11.706 [2024-11-17 08:25:16.531877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:11.706 [2024-11-17 08:25:16.531891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.706 [2024-11-17 08:25:16.532554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.706 [2024-11-17 08:25:16.532599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:11.706 [2024-11-17 08:25:16.532627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:25:11.706 [2024-11-17 08:25:16.532637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.706 [2024-11-17 08:25:16.532669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.706 [2024-11-17 08:25:16.532682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:11.706 [2024-11-17 08:25:16.532693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:11.706 [2024-11-17 08:25:16.532703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.706 [2024-11-17 08:25:16.532743] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:11.706 [2024-11-17 08:25:16.532761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.706 [2024-11-17 08:25:16.532771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:11.706 [2024-11-17 08:25:16.532781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:11.706 [2024-11-17 08:25:16.532790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.706 [2024-11-17 08:25:16.557065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.706 [2024-11-17 08:25:16.557125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:11.706 [2024-11-17 08:25:16.557141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.239 ms 00:25:11.706 [2024-11-17 08:25:16.557155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.706 [2024-11-17 08:25:16.557232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.706 [2024-11-17 08:25:16.557249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:11.706 [2024-11-17 08:25:16.557260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:11.706 [2024-11-17 08:25:16.557268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
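A note on the two layout dumps earlier in this startup sequence: ftl_layout.c's dump_region prints each region in MiB, while the superblock v5 dump prints the same layout as hex block offsets and sizes. The two agree if one FTL block is 4 KiB; that block size is inferred from the numbers themselves (0x5000 blocks = 20480 x 4 KiB = 80.00 MiB for l2p), not stated anywhere in the log. A minimal cross-check sketch, with the region-to-entry pairing read off by eye (layout_check.py and blocks_to_mib are hypothetical, not SPDK API):

    # layout_check.py -- cross-check hex blk_sz values from the SB dump
    # against the MiB figures from the dump_region output above
    FTL_BLOCK = 4096  # bytes per FTL block; an inference from the dumps, not read from the log

    def blocks_to_mib(blk_sz_hex: str) -> float:
        return int(blk_sz_hex, 16) * FTL_BLOCK / (1024 * 1024)

    # (region name, blk_sz from the SB metadata dump, MiB from the NV cache layout dump)
    for region, blk_sz, mib in [("sb", "0x20", 0.12), ("l2p", "0x5000", 80.00),
                                ("band_md", "0x80", 0.50), ("p2l0", "0x800", 8.00)]:
        print(f"{region}: {blocks_to_mib(blk_sz):.2f} MiB (dump says {mib:.2f})")

The same arithmetic reproduces the base-device figures as well, e.g. the data region's blk_sz:0x1900000 is 26214400 blocks, i.e. the 102400.00 MiB printed for data_btm.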
00:25:11.706 [2024-11-17 08:25:16.558761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 273.823 ms, result 0 00:25:13.084  [2024-11-17T08:25:19.032Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-17T08:25:19.969Z] Copying: 45/1024 [MB] (22 MBps) [2024-11-17T08:25:20.907Z] Copying: 68/1024 [MB] (23 MBps) [2024-11-17T08:25:21.845Z] Copying: 91/1024 [MB] (23 MBps) [2024-11-17T08:25:22.782Z] Copying: 114/1024 [MB] (22 MBps) [2024-11-17T08:25:24.161Z] Copying: 136/1024 [MB] (22 MBps) [2024-11-17T08:25:25.097Z] Copying: 159/1024 [MB] (22 MBps) [2024-11-17T08:25:26.035Z] Copying: 181/1024 [MB] (22 MBps) [2024-11-17T08:25:26.972Z] Copying: 204/1024 [MB] (22 MBps) [2024-11-17T08:25:27.910Z] Copying: 227/1024 [MB] (22 MBps) [2024-11-17T08:25:28.847Z] Copying: 249/1024 [MB] (22 MBps) [2024-11-17T08:25:29.784Z] Copying: 272/1024 [MB] (22 MBps) [2024-11-17T08:25:30.761Z] Copying: 294/1024 [MB] (22 MBps) [2024-11-17T08:25:32.140Z] Copying: 317/1024 [MB] (22 MBps) [2024-11-17T08:25:33.078Z] Copying: 339/1024 [MB] (22 MBps) [2024-11-17T08:25:34.015Z] Copying: 361/1024 [MB] (22 MBps) [2024-11-17T08:25:34.953Z] Copying: 384/1024 [MB] (22 MBps) [2024-11-17T08:25:35.888Z] Copying: 406/1024 [MB] (22 MBps) [2024-11-17T08:25:36.825Z] Copying: 429/1024 [MB] (22 MBps) [2024-11-17T08:25:37.763Z] Copying: 452/1024 [MB] (23 MBps) [2024-11-17T08:25:39.142Z] Copying: 476/1024 [MB] (23 MBps) [2024-11-17T08:25:40.078Z] Copying: 498/1024 [MB] (22 MBps) [2024-11-17T08:25:41.014Z] Copying: 522/1024 [MB] (23 MBps) [2024-11-17T08:25:41.952Z] Copying: 544/1024 [MB] (22 MBps) [2024-11-17T08:25:42.889Z] Copying: 567/1024 [MB] (22 MBps) [2024-11-17T08:25:43.826Z] Copying: 589/1024 [MB] (21 MBps) [2024-11-17T08:25:44.763Z] Copying: 611/1024 [MB] (22 MBps) [2024-11-17T08:25:46.141Z] Copying: 633/1024 [MB] (22 MBps) [2024-11-17T08:25:47.078Z] Copying: 656/1024 [MB] (22 MBps) [2024-11-17T08:25:48.014Z] Copying: 678/1024 [MB] (22 MBps) [2024-11-17T08:25:48.951Z] Copying: 701/1024 [MB] (22 MBps) [2024-11-17T08:25:49.888Z] Copying: 723/1024 [MB] (22 MBps) [2024-11-17T08:25:50.825Z] Copying: 746/1024 [MB] (22 MBps) [2024-11-17T08:25:51.762Z] Copying: 768/1024 [MB] (22 MBps) [2024-11-17T08:25:53.140Z] Copying: 791/1024 [MB] (22 MBps) [2024-11-17T08:25:54.076Z] Copying: 814/1024 [MB] (22 MBps) [2024-11-17T08:25:55.012Z] Copying: 836/1024 [MB] (22 MBps) [2024-11-17T08:25:55.950Z] Copying: 859/1024 [MB] (22 MBps) [2024-11-17T08:25:56.889Z] Copying: 882/1024 [MB] (22 MBps) [2024-11-17T08:25:57.825Z] Copying: 905/1024 [MB] (23 MBps) [2024-11-17T08:25:58.761Z] Copying: 927/1024 [MB] (22 MBps) [2024-11-17T08:26:00.137Z] Copying: 950/1024 [MB] (23 MBps) [2024-11-17T08:26:01.072Z] Copying: 973/1024 [MB] (22 MBps) [2024-11-17T08:26:02.042Z] Copying: 996/1024 [MB] (22 MBps) [2024-11-17T08:26:02.042Z] Copying: 1019/1024 [MB] (22 MBps) [2024-11-17T08:26:02.352Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-17 08:26:02.199214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.340 [2024-11-17 08:26:02.199302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:57.340 [2024-11-17 08:26:02.199320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:57.340 [2024-11-17 08:26:02.199330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.340 [2024-11-17 08:26:02.199360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
00:25:57.340 [2024-11-17 08:26:02.202619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.340 [2024-11-17 08:26:02.202665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:57.340 [2024-11-17 08:26:02.202700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.223 ms 00:25:57.340 [2024-11-17 08:26:02.202710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.340 [2024-11-17 08:26:02.202964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.340 [2024-11-17 08:26:02.202983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:57.340 [2024-11-17 08:26:02.202994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:25:57.340 [2024-11-17 08:26:02.203004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.340 [2024-11-17 08:26:02.205959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.340 [2024-11-17 08:26:02.205983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:57.340 [2024-11-17 08:26:02.206010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.938 ms 00:25:57.340 [2024-11-17 08:26:02.206019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.340 [2024-11-17 08:26:02.211408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.340 [2024-11-17 08:26:02.211440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:57.340 [2024-11-17 08:26:02.211469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.365 ms 00:25:57.340 [2024-11-17 08:26:02.211479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.340 [2024-11-17 08:26:02.236737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.340 [2024-11-17 08:26:02.236773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:57.340 [2024-11-17 08:26:02.236804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.187 ms 00:25:57.340 [2024-11-17 08:26:02.236814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.340 [2024-11-17 08:26:02.251612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.340 [2024-11-17 08:26:02.251668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:57.340 [2024-11-17 08:26:02.251719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.758 ms 00:25:57.340 [2024-11-17 08:26:02.251731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.340 [2024-11-17 08:26:02.253683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.340 [2024-11-17 08:26:02.253745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:57.340 [2024-11-17 08:26:02.253775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.907 ms 00:25:57.340 [2024-11-17 08:26:02.253785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.340 [2024-11-17 08:26:02.278939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.340 [2024-11-17 08:26:02.278974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:57.340 [2024-11-17 08:26:02.279003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.118 ms 00:25:57.340 [2024-11-17 08:26:02.279012] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.340 [2024-11-17 08:26:02.303219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.340 [2024-11-17 08:26:02.303265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:57.340 [2024-11-17 08:26:02.303294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.167 ms 00:25:57.340 [2024-11-17 08:26:02.303304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.340 [2024-11-17 08:26:02.328730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.340 [2024-11-17 08:26:02.328765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:57.340 [2024-11-17 08:26:02.328794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.387 ms 00:25:57.340 [2024-11-17 08:26:02.328803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.623 [2024-11-17 08:26:02.354554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.623 [2024-11-17 08:26:02.354591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:57.623 [2024-11-17 08:26:02.354605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.689 ms 00:25:57.623 [2024-11-17 08:26:02.354630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.623 [2024-11-17 08:26:02.354684] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:57.623 [2024-11-17 08:26:02.354705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:57.623 [2024-11-17 08:26:02.354724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:57.623 [2024-11-17 08:26:02.354735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 
[2024-11-17 08:26:02.354867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.354995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 
state: free 00:25:57.623 [2024-11-17 08:26:02.355129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:57.623 [2024-11-17 08:26:02.355209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 
0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:57.624 [2024-11-17 08:26:02.355871] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:57.624 [2024-11-17 08:26:02.355885] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 851e0140-a546-4984-83c2-b80f6477b181 00:25:57.624 [2024-11-17 08:26:02.355895] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:57.624 [2024-11-17 08:26:02.355904] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:57.624 [2024-11-17 08:26:02.355912] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:57.624 [2024-11-17 08:26:02.355922] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:57.624 [2024-11-17 08:26:02.355931] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:57.624 [2024-11-17 08:26:02.355941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:57.624 [2024-11-17 08:26:02.355961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:57.624 [2024-11-17 08:26:02.355969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:57.624 [2024-11-17 08:26:02.355978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:57.624 [2024-11-17 08:26:02.355988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.624 [2024-11-17 08:26:02.355998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:57.624 [2024-11-17 08:26:02.356017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:25:57.624 [2024-11-17 08:26:02.356026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.624 [2024-11-17 08:26:02.369803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.624 [2024-11-17 08:26:02.369835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:25:57.624 [2024-11-17 08:26:02.369864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.735 ms 00:25:57.624 [2024-11-17 08:26:02.369873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.624 [2024-11-17 08:26:02.370300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.624 [2024-11-17 08:26:02.370324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:57.624 [2024-11-17 08:26:02.370344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:25:57.624 [2024-11-17 08:26:02.370353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.624 [2024-11-17 08:26:02.404629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.624 [2024-11-17 08:26:02.404666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.624 [2024-11-17 08:26:02.404695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.624 [2024-11-17 08:26:02.404705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.624 [2024-11-17 08:26:02.404754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.624 [2024-11-17 08:26:02.404769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.624 [2024-11-17 08:26:02.404786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.624 [2024-11-17 08:26:02.404795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.624 [2024-11-17 08:26:02.404887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.624 [2024-11-17 08:26:02.404905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.625 [2024-11-17 08:26:02.404931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-11-17 08:26:02.404956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-11-17 08:26:02.404976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-11-17 08:26:02.404988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.625 [2024-11-17 08:26:02.404999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-11-17 08:26:02.405014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-11-17 08:26:02.483944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-11-17 08:26:02.484000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.625 [2024-11-17 08:26:02.484031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-11-17 08:26:02.484040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-11-17 08:26:02.548806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-11-17 08:26:02.548853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.625 [2024-11-17 08:26:02.548884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-11-17 08:26:02.548901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-11-17 08:26:02.548966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-11-17 
08:26:02.548981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.625 [2024-11-17 08:26:02.548992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-11-17 08:26:02.549001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-11-17 08:26:02.549090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-11-17 08:26:02.549162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.625 [2024-11-17 08:26:02.549189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-11-17 08:26:02.549199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-11-17 08:26:02.549316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-11-17 08:26:02.549334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.625 [2024-11-17 08:26:02.549345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-11-17 08:26:02.549355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-11-17 08:26:02.549399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-11-17 08:26:02.549416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:57.625 [2024-11-17 08:26:02.549427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-11-17 08:26:02.549437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-11-17 08:26:02.549484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-11-17 08:26:02.549498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.625 [2024-11-17 08:26:02.549509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-11-17 08:26:02.549533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-11-17 08:26:02.549577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.625 [2024-11-17 08:26:02.549592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.625 [2024-11-17 08:26:02.549602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.625 [2024-11-17 08:26:02.549611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.625 [2024-11-17 08:26:02.549738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 350.496 ms, result 0 00:25:58.562 00:25:58.562 00:25:58.562 08:26:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:00.467 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:00.467 08:26:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:00.467 08:26:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:00.467 08:26:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:00.467 08:26:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:00.467 08:26:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:00.467 08:26:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:00.467 08:26:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:00.467 08:26:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77740 00:26:00.467 08:26:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 77740 ']' 00:26:00.467 08:26:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 77740 00:26:00.467 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77740) - No such process 00:26:00.467 Process with pid 77740 is not found 00:26:00.467 08:26:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 77740 is not found' 00:26:00.467 08:26:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:00.726 08:26:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:00.726 Remove shared memory files 00:26:00.726 08:26:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:00.726 08:26:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:00.726 08:26:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:00.726 08:26:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:00.726 08:26:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:00.726 08:26:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:00.726 00:26:00.726 real 3m58.977s 00:26:00.726 user 4m39.169s 00:26:00.726 sys 0m34.862s 00:26:00.726 08:26:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.726 08:26:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:00.726 ************************************ 00:26:00.726 END TEST ftl_dirty_shutdown 00:26:00.726 ************************************ 00:26:00.726 08:26:05 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:00.726 08:26:05 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:00.726 08:26:05 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.726 08:26:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:00.726 ************************************ 00:26:00.726 START TEST ftl_upgrade_shutdown 00:26:00.726 ************************************ 00:26:00.726 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:00.986 * Looking for test storage... 
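The "testfile2: OK" line above is the payoff of the dirty-shutdown test: an md5 checksum recorded while the FTL device was live still verifies after the unclean shutdown and restart, which indicates the recovery path restored the L2P mapping and data intact. It also explains the "WAF: inf" in the statistics dump further up: as reported here, WAF is total writes divided by user writes, and this instance logged 960 total writes against 0 user writes. A minimal sketch of the verify pattern, assuming testfile2 was written through the FTL bdev earlier in the test (those steps precede this excerpt):

  md5sum /path/to/testfile2 > testfile2.md5   # checksum taken before the simulated crash
  # ... dirty shutdown, target restart, FTL recovery ...
  md5sum -c testfile2.md5                     # passes only if the data survived intact

The "No such process" message from killprocess is benign: the helper probes with kill -0 before killing, and pid 77740 had already exited during the shutdown sequence.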
00:26:00.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:00.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.986 --rc genhtml_branch_coverage=1 00:26:00.986 --rc genhtml_function_coverage=1 00:26:00.986 --rc genhtml_legend=1 00:26:00.986 --rc geninfo_all_blocks=1 00:26:00.986 --rc geninfo_unexecuted_blocks=1 00:26:00.986 00:26:00.986 ' 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:00.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.986 --rc genhtml_branch_coverage=1 00:26:00.986 --rc genhtml_function_coverage=1 00:26:00.986 --rc genhtml_legend=1 00:26:00.986 --rc geninfo_all_blocks=1 00:26:00.986 --rc geninfo_unexecuted_blocks=1 00:26:00.986 00:26:00.986 ' 00:26:00.986 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:00.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.987 --rc genhtml_branch_coverage=1 00:26:00.987 --rc genhtml_function_coverage=1 00:26:00.987 --rc genhtml_legend=1 00:26:00.987 --rc geninfo_all_blocks=1 00:26:00.987 --rc geninfo_unexecuted_blocks=1 00:26:00.987 00:26:00.987 ' 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:00.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.987 --rc genhtml_branch_coverage=1 00:26:00.987 --rc genhtml_function_coverage=1 00:26:00.987 --rc genhtml_legend=1 00:26:00.987 --rc geninfo_all_blocks=1 00:26:00.987 --rc geninfo_unexecuted_blocks=1 00:26:00.987 00:26:00.987 ' 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:00.987 08:26:05 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80228 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80228 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80228 ']' 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.987 08:26:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:00.987 [2024-11-17 08:26:05.991995] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
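The spdk_tgt app is now coming up on core 0; once waitforlisten sees the RPC socket for pid 80228, the script provisions the FTL device entirely through rpc.py. Condensed from the xtrace that follows (a sketch, not a transcript: UUID arguments are per-run values, shown as placeholders):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1 (5 GiB namespace)
  $rpc bdev_lvol_delete_lvstore -u <old-lvs-uuid>                     # clear any store left by a prior run
  $rpc bdev_lvol_create_lvstore basen1 lvs
  $rpc bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>               # thin-provisioned 20 GiB volume
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                            # 5 GiB slice -> cachen1p0
  $rpc -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2

The lvol is created thin (-t) so a 20 GiB FTL base device can sit on the 5 GiB namespace (basen1 reports 1310720 blocks of 4096 bytes), which is why the [[ 20480 -le 5120 ]] size guard above falls through to the lvol path.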
00:26:00.987 [2024-11-17 08:26:05.992192] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80228 ] 00:26:01.246 [2024-11-17 08:26:06.179249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.505 [2024-11-17 08:26:06.287059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:02.074 08:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:02.642 08:26:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:02.643 08:26:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:02.643 08:26:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:02.643 08:26:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:26:02.643 08:26:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:02.643 08:26:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:02.643 08:26:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:26:02.643 08:26:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:02.643 08:26:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:02.643 { 00:26:02.643 "name": "basen1", 00:26:02.643 "aliases": [ 00:26:02.643 "993724bc-c560-4871-9adf-cd4e5a80b216" 00:26:02.643 ], 00:26:02.643 "product_name": "NVMe disk", 00:26:02.643 "block_size": 4096, 00:26:02.643 "num_blocks": 1310720, 00:26:02.643 "uuid": "993724bc-c560-4871-9adf-cd4e5a80b216", 00:26:02.643 "numa_id": -1, 00:26:02.643 "assigned_rate_limits": { 00:26:02.643 "rw_ios_per_sec": 0, 00:26:02.643 "rw_mbytes_per_sec": 0, 00:26:02.643 "r_mbytes_per_sec": 0, 00:26:02.643 "w_mbytes_per_sec": 0 00:26:02.643 }, 00:26:02.643 "claimed": true, 00:26:02.643 "claim_type": "read_many_write_one", 00:26:02.643 "zoned": false, 00:26:02.643 "supported_io_types": { 00:26:02.643 "read": true, 00:26:02.643 "write": true, 00:26:02.643 "unmap": true, 00:26:02.643 "flush": true, 00:26:02.643 "reset": true, 00:26:02.643 "nvme_admin": true, 00:26:02.643 "nvme_io": true, 00:26:02.643 "nvme_io_md": false, 00:26:02.643 "write_zeroes": true, 00:26:02.643 "zcopy": false, 00:26:02.643 "get_zone_info": false, 00:26:02.643 "zone_management": false, 00:26:02.643 "zone_append": false, 00:26:02.643 "compare": true, 00:26:02.643 "compare_and_write": false, 00:26:02.643 "abort": true, 00:26:02.643 "seek_hole": false, 00:26:02.643 "seek_data": false, 00:26:02.643 "copy": true, 00:26:02.643 "nvme_iov_md": false 00:26:02.643 }, 00:26:02.643 "driver_specific": { 00:26:02.643 "nvme": [ 00:26:02.643 { 00:26:02.643 "pci_address": "0000:00:11.0", 00:26:02.643 "trid": { 00:26:02.643 "trtype": "PCIe", 00:26:02.643 "traddr": "0000:00:11.0" 00:26:02.643 }, 00:26:02.643 "ctrlr_data": { 00:26:02.643 "cntlid": 0, 00:26:02.643 "vendor_id": "0x1b36", 00:26:02.643 "model_number": "QEMU NVMe Ctrl", 00:26:02.643 "serial_number": "12341", 00:26:02.643 "firmware_revision": "8.0.0", 00:26:02.643 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:02.643 "oacs": { 00:26:02.643 "security": 0, 00:26:02.643 "format": 1, 00:26:02.643 "firmware": 0, 00:26:02.643 "ns_manage": 1 00:26:02.643 }, 00:26:02.643 "multi_ctrlr": false, 00:26:02.643 "ana_reporting": false 00:26:02.643 }, 00:26:02.643 "vs": { 00:26:02.643 "nvme_version": "1.4" 00:26:02.643 }, 00:26:02.643 "ns_data": { 00:26:02.643 "id": 1, 00:26:02.643 "can_share": false 00:26:02.643 } 00:26:02.643 } 00:26:02.643 ], 00:26:02.643 "mp_policy": "active_passive" 00:26:02.643 } 00:26:02.643 } 00:26:02.643 ]' 00:26:02.643 08:26:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=d2209c53-2571-41e8-bf48-80c359063b45 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:02.902 08:26:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d2209c53-2571-41e8-bf48-80c359063b45 00:26:03.471 08:26:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:03.471 08:26:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=eae717dc-7911-4172-8efa-c8f17872f5d0 00:26:03.471 08:26:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u eae717dc-7911-4172-8efa-c8f17872f5d0 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=43990545-8ce5-4da5-8a48-979d919d68df 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 43990545-8ce5-4da5-8a48-979d919d68df ]] 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 43990545-8ce5-4da5-8a48-979d919d68df 5120 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=43990545-8ce5-4da5-8a48-979d919d68df 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 43990545-8ce5-4da5-8a48-979d919d68df 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=43990545-8ce5-4da5-8a48-979d919d68df 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:03.730 08:26:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 43990545-8ce5-4da5-8a48-979d919d68df 00:26:03.990 08:26:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:03.990 { 00:26:03.990 "name": "43990545-8ce5-4da5-8a48-979d919d68df", 00:26:03.990 "aliases": [ 00:26:03.990 "lvs/basen1p0" 00:26:03.990 ], 00:26:03.990 "product_name": "Logical Volume", 00:26:03.990 "block_size": 4096, 00:26:03.990 "num_blocks": 5242880, 00:26:03.990 "uuid": "43990545-8ce5-4da5-8a48-979d919d68df", 00:26:03.990 "assigned_rate_limits": { 00:26:03.990 "rw_ios_per_sec": 0, 00:26:03.990 "rw_mbytes_per_sec": 0, 00:26:03.990 "r_mbytes_per_sec": 0, 00:26:03.990 "w_mbytes_per_sec": 0 00:26:03.990 }, 00:26:03.990 "claimed": false, 00:26:03.990 "zoned": false, 00:26:03.990 "supported_io_types": { 00:26:03.990 "read": true, 00:26:03.990 "write": true, 00:26:03.990 "unmap": true, 00:26:03.990 "flush": false, 00:26:03.990 "reset": true, 00:26:03.990 "nvme_admin": false, 00:26:03.990 "nvme_io": false, 00:26:03.990 "nvme_io_md": false, 00:26:03.990 "write_zeroes": 
true, 00:26:03.990 "zcopy": false, 00:26:03.990 "get_zone_info": false, 00:26:03.990 "zone_management": false, 00:26:03.990 "zone_append": false, 00:26:03.990 "compare": false, 00:26:03.990 "compare_and_write": false, 00:26:03.990 "abort": false, 00:26:03.990 "seek_hole": true, 00:26:03.990 "seek_data": true, 00:26:03.990 "copy": false, 00:26:03.990 "nvme_iov_md": false 00:26:03.990 }, 00:26:03.990 "driver_specific": { 00:26:03.990 "lvol": { 00:26:03.990 "lvol_store_uuid": "eae717dc-7911-4172-8efa-c8f17872f5d0", 00:26:03.990 "base_bdev": "basen1", 00:26:03.990 "thin_provision": true, 00:26:03.990 "num_allocated_clusters": 0, 00:26:03.990 "snapshot": false, 00:26:03.990 "clone": false, 00:26:03.990 "esnap_clone": false 00:26:03.990 } 00:26:03.990 } 00:26:03.990 } 00:26:03.990 ]' 00:26:03.990 08:26:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:03.990 08:26:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:03.990 08:26:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:03.990 08:26:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:26:03.990 08:26:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:26:03.990 08:26:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:26:03.990 08:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:04.250 08:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:04.250 08:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:04.508 08:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:04.508 08:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:04.508 08:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:04.768 08:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:04.768 08:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:04.768 08:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 43990545-8ce5-4da5-8a48-979d919d68df -c cachen1p0 --l2p_dram_limit 2 00:26:04.768 [2024-11-17 08:26:09.752139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.768 [2024-11-17 08:26:09.752213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:04.768 [2024-11-17 08:26:09.752235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:04.768 [2024-11-17 08:26:09.752247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.768 [2024-11-17 08:26:09.752316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.768 [2024-11-17 08:26:09.752344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:04.768 [2024-11-17 08:26:09.752357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:26:04.768 [2024-11-17 08:26:09.752368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.768 [2024-11-17 08:26:09.752397] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:04.768 [2024-11-17 
08:26:09.753231] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:04.768 [2024-11-17 08:26:09.753263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.768 [2024-11-17 08:26:09.753276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:04.768 [2024-11-17 08:26:09.753289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.871 ms 00:26:04.768 [2024-11-17 08:26:09.753301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.768 [2024-11-17 08:26:09.753434] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID c55b0063-0f06-4dc7-9148-051e9c1135b4 00:26:04.768 [2024-11-17 08:26:09.754528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.768 [2024-11-17 08:26:09.754581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:04.768 [2024-11-17 08:26:09.754626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:04.768 [2024-11-17 08:26:09.754638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.768 [2024-11-17 08:26:09.759224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.768 [2024-11-17 08:26:09.759270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:04.768 [2024-11-17 08:26:09.759286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.527 ms 00:26:04.768 [2024-11-17 08:26:09.759298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.768 [2024-11-17 08:26:09.759354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.768 [2024-11-17 08:26:09.759373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:04.769 [2024-11-17 08:26:09.759389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:04.769 [2024-11-17 08:26:09.759405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.769 [2024-11-17 08:26:09.759516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.769 [2024-11-17 08:26:09.759539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:04.769 [2024-11-17 08:26:09.759554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:04.769 [2024-11-17 08:26:09.759567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.769 [2024-11-17 08:26:09.759597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:04.769 [2024-11-17 08:26:09.763573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.769 [2024-11-17 08:26:09.763605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:04.769 [2024-11-17 08:26:09.763624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.982 ms 00:26:04.769 [2024-11-17 08:26:09.763635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.769 [2024-11-17 08:26:09.763671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.769 [2024-11-17 08:26:09.763685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:04.769 [2024-11-17 08:26:09.763699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:04.769 [2024-11-17 08:26:09.763709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:04.769 [2024-11-17 08:26:09.763770] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:04.769 [2024-11-17 08:26:09.763944] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:04.769 [2024-11-17 08:26:09.763966] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:04.769 [2024-11-17 08:26:09.763996] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:04.769 [2024-11-17 08:26:09.764029] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:04.769 [2024-11-17 08:26:09.764042] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:04.769 [2024-11-17 08:26:09.764057] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:04.769 [2024-11-17 08:26:09.764070] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:04.769 [2024-11-17 08:26:09.764083] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:04.769 [2024-11-17 08:26:09.764094] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:04.769 [2024-11-17 08:26:09.764124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.769 [2024-11-17 08:26:09.764135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:04.769 [2024-11-17 08:26:09.764150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.373 ms 00:26:04.769 [2024-11-17 08:26:09.764161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.769 [2024-11-17 08:26:09.764273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.769 [2024-11-17 08:26:09.764291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:04.769 [2024-11-17 08:26:09.764308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:26:04.769 [2024-11-17 08:26:09.764332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.769 [2024-11-17 08:26:09.764449] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:04.769 [2024-11-17 08:26:09.764472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:04.769 [2024-11-17 08:26:09.764503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:04.769 [2024-11-17 08:26:09.764531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.769 [2024-11-17 08:26:09.764558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:04.769 [2024-11-17 08:26:09.764583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:04.769 [2024-11-17 08:26:09.764611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:04.769 [2024-11-17 08:26:09.764622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:04.769 [2024-11-17 08:26:09.764633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:04.769 [2024-11-17 08:26:09.764644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.769 [2024-11-17 08:26:09.764671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:04.769 [2024-11-17 08:26:09.764681] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:26:04.769 [2024-11-17 08:26:09.764694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.769 [2024-11-17 08:26:09.764704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:04.769 [2024-11-17 08:26:09.764716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:04.769 [2024-11-17 08:26:09.764726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.769 [2024-11-17 08:26:09.764740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:04.769 [2024-11-17 08:26:09.764751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:04.769 [2024-11-17 08:26:09.764764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.769 [2024-11-17 08:26:09.764775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:04.769 [2024-11-17 08:26:09.764787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:04.769 [2024-11-17 08:26:09.764797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:04.769 [2024-11-17 08:26:09.764809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:04.769 [2024-11-17 08:26:09.764819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:04.769 [2024-11-17 08:26:09.764831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:04.769 [2024-11-17 08:26:09.764841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:04.769 [2024-11-17 08:26:09.764853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:04.769 [2024-11-17 08:26:09.764864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:04.769 [2024-11-17 08:26:09.764875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:04.769 [2024-11-17 08:26:09.764885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:04.769 [2024-11-17 08:26:09.764897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:04.769 [2024-11-17 08:26:09.764907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:04.769 [2024-11-17 08:26:09.764921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:04.769 [2024-11-17 08:26:09.764946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.769 [2024-11-17 08:26:09.764957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:04.769 [2024-11-17 08:26:09.764967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:04.769 [2024-11-17 08:26:09.764978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.769 [2024-11-17 08:26:09.764989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:04.769 [2024-11-17 08:26:09.765001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:04.769 [2024-11-17 08:26:09.765011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.769 [2024-11-17 08:26:09.765022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:04.769 [2024-11-17 08:26:09.765033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:04.769 [2024-11-17 08:26:09.765074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.769 [2024-11-17 08:26:09.765083] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:04.769 [2024-11-17 08:26:09.765095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:04.769 [2024-11-17 08:26:09.765105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:04.769 [2024-11-17 08:26:09.765118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:04.769 [2024-11-17 08:26:09.765129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:04.769 [2024-11-17 08:26:09.765142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:04.769 [2024-11-17 08:26:09.765151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:04.769 [2024-11-17 08:26:09.765162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:04.769 [2024-11-17 08:26:09.765172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:04.769 [2024-11-17 08:26:09.765183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:04.769 [2024-11-17 08:26:09.765197] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:04.769 [2024-11-17 08:26:09.765228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:04.769 [2024-11-17 08:26:09.765241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:04.769 [2024-11-17 08:26:09.765254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:04.769 [2024-11-17 08:26:09.765264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:04.769 [2024-11-17 08:26:09.765276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:04.769 [2024-11-17 08:26:09.765286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:04.769 [2024-11-17 08:26:09.765298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:04.769 [2024-11-17 08:26:09.765307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:04.769 [2024-11-17 08:26:09.765319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:04.769 [2024-11-17 08:26:09.765330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:04.769 [2024-11-17 08:26:09.765343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:04.769 [2024-11-17 08:26:09.765354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:04.769 [2024-11-17 08:26:09.765365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:04.770 [2024-11-17 08:26:09.765376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:04.770 [2024-11-17 08:26:09.765389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:04.770 [2024-11-17 08:26:09.765400] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:04.770 [2024-11-17 08:26:09.765413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:04.770 [2024-11-17 08:26:09.765424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:04.770 [2024-11-17 08:26:09.765452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:04.770 [2024-11-17 08:26:09.765463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:04.770 [2024-11-17 08:26:09.765475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:04.770 [2024-11-17 08:26:09.765487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.770 [2024-11-17 08:26:09.765499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:04.770 [2024-11-17 08:26:09.765510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.109 ms 00:26:04.770 [2024-11-17 08:26:09.765522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.770 [2024-11-17 08:26:09.765580] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
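The layout numbers in the dump above are internally consistent and worth a quick sanity check. The base data region (data_btm) is 18432 MiB, i.e. 18432 * 256 = 4718592 four-KiB blocks; the reported 3774873 L2P entries is almost exactly 80% of that, consistent with FTL holding back roughly 20% of the base device as overprovisioning (nothing in this run appears to override the ratio). At 4 bytes per entry, the table itself needs about 14.4 MiB, matching the 14.50 MiB l2p region:

  echo $(( 4718592 * 80 / 100 ))            # 3774873 -> L2P entries ~= 80% of the data region's 4 KiB blocks
  echo $(( 3774873 * 4 / 1024 / 1024 ))     # ~14 MiB of mapping fits the 14.50 MiB l2p region

The NV cache scrub announced here covers 5 chunks and, per the trace step that follows, completes in about 2.36 s.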
00:26:04.770 [2024-11-17 08:26:09.765602] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:07.302 [2024-11-17 08:26:12.121244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.302 [2024-11-17 08:26:12.121334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:07.303 [2024-11-17 08:26:12.121354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2355.678 ms 00:26:07.303 [2024-11-17 08:26:12.121367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.147738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.147790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:07.303 [2024-11-17 08:26:12.147809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.142 ms 00:26:07.303 [2024-11-17 08:26:12.147823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.147964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.147984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:07.303 [2024-11-17 08:26:12.147996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:07.303 [2024-11-17 08:26:12.148045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.181226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.181287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:07.303 [2024-11-17 08:26:12.181303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.097 ms 00:26:07.303 [2024-11-17 08:26:12.181316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.181374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.181391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:07.303 [2024-11-17 08:26:12.181402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:07.303 [2024-11-17 08:26:12.181428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.181767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.181789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:07.303 [2024-11-17 08:26:12.181801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:26:07.303 [2024-11-17 08:26:12.181813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.181866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.181883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:07.303 [2024-11-17 08:26:12.181910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:07.303 [2024-11-17 08:26:12.181924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.196254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.196292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:07.303 [2024-11-17 08:26:12.196307] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.307 ms 00:26:07.303 [2024-11-17 08:26:12.196319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.206732] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:07.303 [2024-11-17 08:26:12.207547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.207577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:07.303 [2024-11-17 08:26:12.207596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.133 ms 00:26:07.303 [2024-11-17 08:26:12.207622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.238071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.238114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:07.303 [2024-11-17 08:26:12.238132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.415 ms 00:26:07.303 [2024-11-17 08:26:12.238142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.238238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.238257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:07.303 [2024-11-17 08:26:12.238273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:26:07.303 [2024-11-17 08:26:12.238283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.262301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.262350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:07.303 [2024-11-17 08:26:12.262368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.959 ms 00:26:07.303 [2024-11-17 08:26:12.262379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.286178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.286223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:07.303 [2024-11-17 08:26:12.286240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.750 ms 00:26:07.303 [2024-11-17 08:26:12.286250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.303 [2024-11-17 08:26:12.286833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.303 [2024-11-17 08:26:12.286855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:07.303 [2024-11-17 08:26:12.286869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.541 ms 00:26:07.303 [2024-11-17 08:26:12.286896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.562 [2024-11-17 08:26:12.358705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.562 [2024-11-17 08:26:12.358745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:07.562 [2024-11-17 08:26:12.358766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 71.731 ms 00:26:07.562 [2024-11-17 08:26:12.358777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.562 [2024-11-17 08:26:12.383823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:07.562 [2024-11-17 08:26:12.383856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:07.562 [2024-11-17 08:26:12.383883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.959 ms 00:26:07.562 [2024-11-17 08:26:12.383894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.562 [2024-11-17 08:26:12.408481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.562 [2024-11-17 08:26:12.408512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:07.562 [2024-11-17 08:26:12.408528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.541 ms 00:26:07.562 [2024-11-17 08:26:12.408538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.562 [2024-11-17 08:26:12.433031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.562 [2024-11-17 08:26:12.433079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:07.562 [2024-11-17 08:26:12.433105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.449 ms 00:26:07.562 [2024-11-17 08:26:12.433118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.562 [2024-11-17 08:26:12.433169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.562 [2024-11-17 08:26:12.433322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:07.562 [2024-11-17 08:26:12.433354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:07.562 [2024-11-17 08:26:12.433381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.562 [2024-11-17 08:26:12.433493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.562 [2024-11-17 08:26:12.433527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:07.562 [2024-11-17 08:26:12.433541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:07.562 [2024-11-17 08:26:12.433551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.562 [2024-11-17 08:26:12.434690] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2682.096 ms, result 0 00:26:07.562 { 00:26:07.562 "name": "ftl", 00:26:07.562 "uuid": "c55b0063-0f06-4dc7-9148-051e9c1135b4" 00:26:07.562 } 00:26:07.562 08:26:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:07.821 [2024-11-17 08:26:12.709589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.821 08:26:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:08.080 08:26:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:08.339 [2024-11-17 08:26:13.194121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:08.339 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:08.598 [2024-11-17 08:26:13.475346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:08.598 08:26:13 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:08.857 Fill FTL, iteration 1 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80348 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80348 /var/tmp/spdk.tgt.sock 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80348 ']' 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.857 08:26:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:09.117 [2024-11-17 08:26:13.929163] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:26:09.117 [2024-11-17 08:26:13.929300] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80348 ] 00:26:09.117 [2024-11-17 08:26:14.102199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.376 [2024-11-17 08:26:14.224949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.944 08:26:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.944 08:26:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:09.944 08:26:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:10.513 ftln1 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80348 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80348 ']' 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80348 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80348 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:10.513 killing process with pid 80348 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80348' 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80348 00:26:10.513 08:26:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80348 00:26:12.419 08:26:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:12.419 08:26:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:12.419 [2024-11-17 08:26:17.247695] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
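For anyone reconstructing the tcp_dd path from the ftl/common.sh trace above: the target exports the FTL bdev over NVMe/TCP, and a second SPDK app on /var/tmp/spdk.tgt.sock attaches it back as bdev ftln1 for spdk_dd to write through. Condensed from the commands logged above (rpc.py paths shortened):

  # target side: bdev "ftl" becomes a namespace of cnode0, listening on 127.0.0.1:4420
  scripts/rpc.py nvmf_create_transport --trtype TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  # initiator side: controller "ftl" surfaces the remote namespace as bdev "ftln1"
  scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0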
00:26:12.419 [2024-11-17 08:26:17.248123] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80394 ] 00:26:12.419 [2024-11-17 08:26:17.426002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.677 [2024-11-17 08:26:17.508213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.056  [2024-11-17T08:26:20.006Z] Copying: 218/1024 [MB] (218 MBps) [2024-11-17T08:26:20.944Z] Copying: 441/1024 [MB] (223 MBps) [2024-11-17T08:26:21.882Z] Copying: 664/1024 [MB] (223 MBps) [2024-11-17T08:26:22.820Z] Copying: 885/1024 [MB] (221 MBps) [2024-11-17T08:26:23.388Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:26:18.376 00:26:18.376 Calculate MD5 checksum, iteration 1 00:26:18.376 08:26:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:18.376 08:26:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:18.376 08:26:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:18.376 08:26:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:18.376 08:26:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:18.376 08:26:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:18.376 08:26:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:18.376 08:26:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:18.635 [2024-11-17 08:26:23.413519] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
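The fill that just completed is half of the per-iteration pattern: iteration i writes 1024 x 1 MiB of urandom into ftln1 at seek=i*1024, then reads the same 1 GiB window back with skip=i*1024 and records its md5. A minimal sketch of that loop, using the upgrade_shutdown.sh variable names from the trace above (tcp_dd wraps the spdk_dd invocation shown; $testdir stands in for .../spdk/test/ftl):

  for ((i = 0; i < iterations; i++)); do
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$((i * 1024))
    tcp_dd --ib=ftln1 --of=$testdir/file --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
    sums[i]=$(md5sum $testdir/file | cut -f1 '-d ')   # keep the digest, drop the file name
  done

The saved sums are presumably what the post-shutdown half of the test checks the data against once the target comes back up after the upgrade.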
00:26:18.635 [2024-11-17 08:26:23.413948] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80454 ] 00:26:18.635 [2024-11-17 08:26:23.589231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.895 [2024-11-17 08:26:23.676254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.275  [2024-11-17T08:26:26.224Z] Copying: 471/1024 [MB] (471 MBps) [2024-11-17T08:26:26.224Z] Copying: 946/1024 [MB] (475 MBps) [2024-11-17T08:26:27.161Z] Copying: 1024/1024 [MB] (average 472 MBps) 00:26:22.149 00:26:22.149 08:26:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:22.149 08:26:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:24.053 08:26:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:24.053 Fill FTL, iteration 2 00:26:24.053 08:26:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=27755c57df2df43be5d890199af449c5 00:26:24.053 08:26:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:24.053 08:26:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:24.053 08:26:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:24.053 08:26:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:24.053 08:26:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:24.053 08:26:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:24.053 08:26:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:24.053 08:26:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:24.054 08:26:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:24.054 [2024-11-17 08:26:28.795031] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:26:24.054 [2024-11-17 08:26:28.795229] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80515 ] 00:26:24.054 [2024-11-17 08:26:28.982630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.322 [2024-11-17 08:26:29.106270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.701  [2024-11-17T08:26:31.651Z] Copying: 215/1024 [MB] (215 MBps) [2024-11-17T08:26:32.594Z] Copying: 432/1024 [MB] (217 MBps) [2024-11-17T08:26:33.593Z] Copying: 647/1024 [MB] (215 MBps) [2024-11-17T08:26:34.531Z] Copying: 865/1024 [MB] (218 MBps) [2024-11-17T08:26:35.100Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:26:30.088 00:26:30.088 Calculate MD5 checksum, iteration 2 00:26:30.088 08:26:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:30.088 08:26:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:30.088 08:26:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:30.088 08:26:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:30.088 08:26:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:30.088 08:26:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:30.088 08:26:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:30.088 08:26:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:30.347 [2024-11-17 08:26:35.118706] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:26:30.347 [2024-11-17 08:26:35.118873] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80575 ] 00:26:30.347 [2024-11-17 08:26:35.295120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.606 [2024-11-17 08:26:35.387683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.985  [2024-11-17T08:26:38.375Z] Copying: 456/1024 [MB] (456 MBps) [2024-11-17T08:26:38.375Z] Copying: 912/1024 [MB] (456 MBps) [2024-11-17T08:26:39.312Z] Copying: 1024/1024 [MB] (average 456 MBps) 00:26:34.300 00:26:34.300 08:26:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:34.300 08:26:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:36.207 08:26:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:36.207 08:26:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f1f0952778f016b62af8fdeeb0a2032e 00:26:36.207 08:26:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:36.207 08:26:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:36.207 08:26:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:36.207 [2024-11-17 08:26:41.151173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.207 [2024-11-17 08:26:41.151223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:36.207 [2024-11-17 08:26:41.151261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:36.207 [2024-11-17 08:26:41.151272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.207 [2024-11-17 08:26:41.151328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.207 [2024-11-17 08:26:41.151344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:36.207 [2024-11-17 08:26:41.151361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:36.207 [2024-11-17 08:26:41.151371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.207 [2024-11-17 08:26:41.151396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.207 [2024-11-17 08:26:41.151408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:36.207 [2024-11-17 08:26:41.151418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:36.207 [2024-11-17 08:26:41.151428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.207 [2024-11-17 08:26:41.151556] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.357 ms, result 0 00:26:36.207 true 00:26:36.207 08:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:36.467 { 00:26:36.467 "name": "ftl", 00:26:36.467 "properties": [ 00:26:36.467 { 00:26:36.467 "name": "superblock_version", 00:26:36.467 "value": 5, 00:26:36.467 "read-only": true 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "name": "base_device", 00:26:36.467 "bands": [ 00:26:36.467 { 00:26:36.467 "id": 
0, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 1, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 2, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 3, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 4, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 5, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 6, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 7, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 8, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 9, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 10, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 11, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 12, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 13, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 14, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 15, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 16, 00:26:36.467 "state": "FREE", 00:26:36.467 "validity": 0.0 00:26:36.467 }, 00:26:36.467 { 00:26:36.467 "id": 17, 00:26:36.467 "state": "FREE", 00:26:36.468 "validity": 0.0 00:26:36.468 } 00:26:36.468 ], 00:26:36.468 "read-only": true 00:26:36.468 }, 00:26:36.468 { 00:26:36.468 "name": "cache_device", 00:26:36.468 "type": "bdev", 00:26:36.468 "chunks": [ 00:26:36.468 { 00:26:36.468 "id": 0, 00:26:36.468 "state": "INACTIVE", 00:26:36.468 "utilization": 0.0 00:26:36.468 }, 00:26:36.468 { 00:26:36.468 "id": 1, 00:26:36.468 "state": "CLOSED", 00:26:36.468 "utilization": 1.0 00:26:36.468 }, 00:26:36.468 { 00:26:36.468 "id": 2, 00:26:36.468 "state": "CLOSED", 00:26:36.468 "utilization": 1.0 00:26:36.468 }, 00:26:36.468 { 00:26:36.468 "id": 3, 00:26:36.468 "state": "OPEN", 00:26:36.468 "utilization": 0.001953125 00:26:36.468 }, 00:26:36.468 { 00:26:36.468 "id": 4, 00:26:36.468 "state": "OPEN", 00:26:36.468 "utilization": 0.0 00:26:36.468 } 00:26:36.468 ], 00:26:36.468 "read-only": true 00:26:36.468 }, 00:26:36.468 { 00:26:36.468 "name": "verbose_mode", 00:26:36.468 "value": true, 00:26:36.468 "unit": "", 00:26:36.468 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:36.468 }, 00:26:36.468 { 00:26:36.468 "name": "prep_upgrade_on_shutdown", 00:26:36.468 "value": false, 00:26:36.468 "unit": "", 00:26:36.468 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:36.468 } 00:26:36.468 ] 00:26:36.468 } 00:26:36.468 08:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:36.728 [2024-11-17 08:26:41.595565] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.728 [2024-11-17 08:26:41.595623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:36.728 [2024-11-17 08:26:41.595654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:36.728 [2024-11-17 08:26:41.595663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.728 [2024-11-17 08:26:41.595693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.728 [2024-11-17 08:26:41.595722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:36.728 [2024-11-17 08:26:41.595732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:36.728 [2024-11-17 08:26:41.595741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.728 [2024-11-17 08:26:41.595764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:36.728 [2024-11-17 08:26:41.595775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:36.728 [2024-11-17 08:26:41.595784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:36.728 [2024-11-17 08:26:41.595793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:36.728 [2024-11-17 08:26:41.595853] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.274 ms, result 0 00:26:36.728 true 00:26:36.728 08:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:36.728 08:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:36.728 08:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:36.988 08:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:36.988 08:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:36.988 08:26:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:37.247 [2024-11-17 08:26:42.088158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.247 [2024-11-17 08:26:42.088203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:37.247 [2024-11-17 08:26:42.088234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:37.247 [2024-11-17 08:26:42.088243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.247 [2024-11-17 08:26:42.088272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.247 [2024-11-17 08:26:42.088285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:37.247 [2024-11-17 08:26:42.088295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:37.247 [2024-11-17 08:26:42.088304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.247 [2024-11-17 08:26:42.088326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:37.247 [2024-11-17 08:26:42.088338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:37.247 [2024-11-17 08:26:42.088348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:37.247 [2024-11-17 
08:26:42.088356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:37.247 [2024-11-17 08:26:42.088419] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.250 ms, result 0 00:26:37.247 true 00:26:37.247 08:26:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:37.507 { 00:26:37.507 "name": "ftl", 00:26:37.507 "properties": [ 00:26:37.507 { 00:26:37.507 "name": "superblock_version", 00:26:37.507 "value": 5, 00:26:37.507 "read-only": true 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "name": "base_device", 00:26:37.507 "bands": [ 00:26:37.507 { 00:26:37.507 "id": 0, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 1, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 2, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 3, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 4, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 5, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 6, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 7, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 8, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 9, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 10, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 11, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 12, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 13, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 14, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 15, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 16, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 17, 00:26:37.507 "state": "FREE", 00:26:37.507 "validity": 0.0 00:26:37.507 } 00:26:37.507 ], 00:26:37.507 "read-only": true 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "name": "cache_device", 00:26:37.507 "type": "bdev", 00:26:37.507 "chunks": [ 00:26:37.507 { 00:26:37.507 "id": 0, 00:26:37.507 "state": "INACTIVE", 00:26:37.507 "utilization": 0.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 1, 00:26:37.507 "state": "CLOSED", 00:26:37.507 "utilization": 1.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 2, 00:26:37.507 "state": "CLOSED", 00:26:37.507 "utilization": 1.0 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 3, 00:26:37.507 "state": "OPEN", 00:26:37.507 "utilization": 0.001953125 00:26:37.507 }, 00:26:37.507 { 00:26:37.507 "id": 4, 00:26:37.507 "state": "OPEN", 00:26:37.507 "utilization": 0.0 00:26:37.507 } 00:26:37.507 ], 00:26:37.507 "read-only": true 00:26:37.507 
}, 00:26:37.507 { 00:26:37.507 "name": "verbose_mode", 00:26:37.507 "value": true, 00:26:37.507 "unit": "", 00:26:37.508 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:37.508 }, 00:26:37.508 { 00:26:37.508 "name": "prep_upgrade_on_shutdown", 00:26:37.508 "value": true, 00:26:37.508 "unit": "", 00:26:37.508 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:37.508 } 00:26:37.508 ] 00:26:37.508 } 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80228 ]] 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80228 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80228 ']' 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80228 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80228 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80228' 00:26:37.508 killing process with pid 80228 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80228 00:26:37.508 08:26:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80228 00:26:38.077 [2024-11-17 08:26:43.084559] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:38.336 [2024-11-17 08:26:43.100514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.336 [2024-11-17 08:26:43.100570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:38.336 [2024-11-17 08:26:43.100602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:38.336 [2024-11-17 08:26:43.100612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.336 [2024-11-17 08:26:43.100638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:38.336 [2024-11-17 08:26:43.103350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.336 [2024-11-17 08:26:43.103394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:38.336 [2024-11-17 08:26:43.103422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.695 ms 00:26:38.336 [2024-11-17 08:26:43.103431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.465 [2024-11-17 08:26:51.048353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.465 [2024-11-17 08:26:51.048422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:46.465 [2024-11-17 08:26:51.048455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7944.900 ms 00:26:46.465 [2024-11-17 08:26:51.048487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.465 [2024-11-17 
08:26:51.049678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.465 [2024-11-17 08:26:51.049728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:46.465 [2024-11-17 08:26:51.049743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.170 ms 00:26:46.465 [2024-11-17 08:26:51.049754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.465 [2024-11-17 08:26:51.050863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.465 [2024-11-17 08:26:51.050907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:46.465 [2024-11-17 08:26:51.050936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.072 ms 00:26:46.465 [2024-11-17 08:26:51.050946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.465 [2024-11-17 08:26:51.061340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.465 [2024-11-17 08:26:51.061390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:46.465 [2024-11-17 08:26:51.061419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.352 ms 00:26:46.465 [2024-11-17 08:26:51.061430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.465 [2024-11-17 08:26:51.068032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.466 [2024-11-17 08:26:51.068068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:46.466 [2024-11-17 08:26:51.068105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.565 ms 00:26:46.466 [2024-11-17 08:26:51.068117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.466 [2024-11-17 08:26:51.068187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.466 [2024-11-17 08:26:51.068202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:46.466 [2024-11-17 08:26:51.068218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:26:46.466 [2024-11-17 08:26:51.068228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.466 [2024-11-17 08:26:51.078278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.466 [2024-11-17 08:26:51.078309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:46.466 [2024-11-17 08:26:51.078337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.032 ms 00:26:46.466 [2024-11-17 08:26:51.078346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.466 [2024-11-17 08:26:51.088453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.466 [2024-11-17 08:26:51.088483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:46.466 [2024-11-17 08:26:51.088510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.071 ms 00:26:46.466 [2024-11-17 08:26:51.088519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.466 [2024-11-17 08:26:51.098364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.466 [2024-11-17 08:26:51.098395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:46.466 [2024-11-17 08:26:51.098422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.807 ms 00:26:46.466 [2024-11-17 08:26:51.098431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 
0 00:26:46.466 [2024-11-17 08:26:51.108192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.466 [2024-11-17 08:26:51.108223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:46.466 [2024-11-17 08:26:51.108251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.699 ms 00:26:46.466 [2024-11-17 08:26:51.108260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.466 [2024-11-17 08:26:51.108294] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:46.466 [2024-11-17 08:26:51.108314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:46.466 [2024-11-17 08:26:51.108336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:46.466 [2024-11-17 08:26:51.108360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:46.466 [2024-11-17 08:26:51.108371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:46.466 [2024-11-17 08:26:51.108548] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:46.466 [2024-11-17 08:26:51.108558] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c55b0063-0f06-4dc7-9148-051e9c1135b4 00:26:46.466 [2024-11-17 08:26:51.108568] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:46.466 [2024-11-17 08:26:51.108577] 
ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:46.466 [2024-11-17 08:26:51.108586] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:46.466 [2024-11-17 08:26:51.108597] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:46.466 [2024-11-17 08:26:51.108607] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:46.466 [2024-11-17 08:26:51.108621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:46.466 [2024-11-17 08:26:51.108630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:46.466 [2024-11-17 08:26:51.108639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:46.466 [2024-11-17 08:26:51.108648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:46.466 [2024-11-17 08:26:51.108657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.466 [2024-11-17 08:26:51.108678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:46.466 [2024-11-17 08:26:51.108688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 00:26:46.466 [2024-11-17 08:26:51.108698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.466 [2024-11-17 08:26:51.121749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.466 [2024-11-17 08:26:51.121781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:46.466 [2024-11-17 08:26:51.121811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.030 ms 00:26:46.466 [2024-11-17 08:26:51.121827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.466 [2024-11-17 08:26:51.122250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.466 [2024-11-17 08:26:51.122267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:46.466 [2024-11-17 08:26:51.122279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.400 ms 00:26:46.466 [2024-11-17 08:26:51.122289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.466 [2024-11-17 08:26:51.169420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:46.466 [2024-11-17 08:26:51.169464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:46.466 [2024-11-17 08:26:51.169479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:46.466 [2024-11-17 08:26:51.169494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.466 [2024-11-17 08:26:51.169529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:46.466 [2024-11-17 08:26:51.169541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:46.466 [2024-11-17 08:26:51.169550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:46.466 [2024-11-17 08:26:51.169560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.467 [2024-11-17 08:26:51.169654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:46.467 [2024-11-17 08:26:51.169670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:46.467 [2024-11-17 08:26:51.169680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:46.467 [2024-11-17 08:26:51.169690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.467 
[2024-11-17 08:26:51.169748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:46.467 [2024-11-17 08:26:51.169760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:46.467 [2024-11-17 08:26:51.169770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:46.467 [2024-11-17 08:26:51.169779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.467 [2024-11-17 08:26:51.247973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:46.467 [2024-11-17 08:26:51.248028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:46.467 [2024-11-17 08:26:51.248059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:46.467 [2024-11-17 08:26:51.248075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.467 [2024-11-17 08:26:51.312337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:46.467 [2024-11-17 08:26:51.312381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:46.467 [2024-11-17 08:26:51.312412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:46.467 [2024-11-17 08:26:51.312423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.467 [2024-11-17 08:26:51.312505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:46.467 [2024-11-17 08:26:51.312522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:46.467 [2024-11-17 08:26:51.312532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:46.467 [2024-11-17 08:26:51.312542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.467 [2024-11-17 08:26:51.312631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:46.467 [2024-11-17 08:26:51.312663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:46.467 [2024-11-17 08:26:51.312689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:46.467 [2024-11-17 08:26:51.312699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.467 [2024-11-17 08:26:51.312807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:46.467 [2024-11-17 08:26:51.312826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:46.467 [2024-11-17 08:26:51.312838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:46.467 [2024-11-17 08:26:51.312848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.467 [2024-11-17 08:26:51.312892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:46.467 [2024-11-17 08:26:51.312915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:46.467 [2024-11-17 08:26:51.312926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:46.467 [2024-11-17 08:26:51.312936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.467 [2024-11-17 08:26:51.312975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:46.467 [2024-11-17 08:26:51.312989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:46.467 [2024-11-17 08:26:51.313000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:46.467 [2024-11-17 08:26:51.313009] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:46.467 [2024-11-17 08:26:51.313060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:46.467 [2024-11-17 08:26:51.313104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:46.467 [2024-11-17 08:26:51.313114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:46.467 [2024-11-17 08:26:51.313124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.467 [2024-11-17 08:26:51.313304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8212.803 ms, result 0 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80770 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80770 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80770 ']' 00:26:49.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.759 08:26:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:49.759 [2024-11-17 08:26:54.673312] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
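Before following the relaunched target: the statistics block dumped during the shutdown above is internally consistent and worth a quick check. User writes of 524288 blocks at FTL's 4 KiB block size is exactly the 2 x 1024 MiB pushed in by the two fill iterations, the closed-band tallies (261120 + 261120 + 2048) add up to the same 524288 valid LBAs, and WAF is simply total device writes over user writes:

  echo $((524288 * 4096 / 1048576))     # 2048 MiB, i.e. the two 1 GiB fills
  echo 'scale=4; 786752 / 524288' | bc  # 1.5006, matching the logged WAF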
00:26:49.759 [2024-11-17 08:26:54.673478] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80770 ] 00:26:50.018 [2024-11-17 08:26:54.851458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.019 [2024-11-17 08:26:54.931369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.957 [2024-11-17 08:26:55.640868] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:50.957 [2024-11-17 08:26:55.640948] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:50.957 [2024-11-17 08:26:55.786226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.957 [2024-11-17 08:26:55.786267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:50.957 [2024-11-17 08:26:55.786300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:50.957 [2024-11-17 08:26:55.786309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.957 [2024-11-17 08:26:55.786368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.957 [2024-11-17 08:26:55.786384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:50.957 [2024-11-17 08:26:55.786394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:26:50.957 [2024-11-17 08:26:55.786403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.957 [2024-11-17 08:26:55.786438] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:50.957 [2024-11-17 08:26:55.787288] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:50.957 [2024-11-17 08:26:55.787314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.957 [2024-11-17 08:26:55.787325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:50.957 [2024-11-17 08:26:55.787336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.889 ms 00:26:50.957 [2024-11-17 08:26:55.787345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.957 [2024-11-17 08:26:55.788506] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:50.957 [2024-11-17 08:26:55.801471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.957 [2024-11-17 08:26:55.801507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:50.957 [2024-11-17 08:26:55.801543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.968 ms 00:26:50.957 [2024-11-17 08:26:55.801553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.957 [2024-11-17 08:26:55.801614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.957 [2024-11-17 08:26:55.801631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:50.957 [2024-11-17 08:26:55.801642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:50.957 [2024-11-17 08:26:55.801651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.957 [2024-11-17 08:26:55.805658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.957 [2024-11-17 
08:26:55.805694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:50.957 [2024-11-17 08:26:55.805723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.928 ms 00:26:50.957 [2024-11-17 08:26:55.805733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.957 [2024-11-17 08:26:55.805798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.957 [2024-11-17 08:26:55.805815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:50.957 [2024-11-17 08:26:55.805825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:50.957 [2024-11-17 08:26:55.805834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.957 [2024-11-17 08:26:55.805896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.957 [2024-11-17 08:26:55.805912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:50.957 [2024-11-17 08:26:55.805928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:50.957 [2024-11-17 08:26:55.805953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.957 [2024-11-17 08:26:55.806004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:50.957 [2024-11-17 08:26:55.809603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.957 [2024-11-17 08:26:55.809647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:50.957 [2024-11-17 08:26:55.809676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.607 ms 00:26:50.957 [2024-11-17 08:26:55.809693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.957 [2024-11-17 08:26:55.809749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.957 [2024-11-17 08:26:55.809769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:50.957 [2024-11-17 08:26:55.809780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:50.957 [2024-11-17 08:26:55.809789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.957 [2024-11-17 08:26:55.809819] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:50.957 [2024-11-17 08:26:55.809844] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:50.957 [2024-11-17 08:26:55.809882] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:50.957 [2024-11-17 08:26:55.809913] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:50.957 [2024-11-17 08:26:55.810025] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:50.957 [2024-11-17 08:26:55.810039] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:50.957 [2024-11-17 08:26:55.810052] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:50.957 [2024-11-17 08:26:55.810064] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:50.957 [2024-11-17 08:26:55.810075] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:26:50.957 [2024-11-17 08:26:55.810090] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:50.957 [2024-11-17 08:26:55.810100] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:50.957 [2024-11-17 08:26:55.810123] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:50.957 [2024-11-17 08:26:55.810134] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:50.957 [2024-11-17 08:26:55.810145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.957 [2024-11-17 08:26:55.810155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:50.957 [2024-11-17 08:26:55.810165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.328 ms 00:26:50.957 [2024-11-17 08:26:55.810175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.957 [2024-11-17 08:26:55.810273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.957 [2024-11-17 08:26:55.810288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:50.957 [2024-11-17 08:26:55.810313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:26:50.957 [2024-11-17 08:26:55.810327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.957 [2024-11-17 08:26:55.810445] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:50.958 [2024-11-17 08:26:55.810460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:50.958 [2024-11-17 08:26:55.810471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:50.958 [2024-11-17 08:26:55.810481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.958 [2024-11-17 08:26:55.810492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:50.958 [2024-11-17 08:26:55.810501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:50.958 [2024-11-17 08:26:55.810510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:50.958 [2024-11-17 08:26:55.810519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:50.958 [2024-11-17 08:26:55.810528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:50.958 [2024-11-17 08:26:55.810537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.958 [2024-11-17 08:26:55.810547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:50.958 [2024-11-17 08:26:55.810556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:50.958 [2024-11-17 08:26:55.810565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.958 [2024-11-17 08:26:55.810574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:50.958 [2024-11-17 08:26:55.810584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:50.958 [2024-11-17 08:26:55.810593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.958 [2024-11-17 08:26:55.810602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:50.958 [2024-11-17 08:26:55.810611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:50.958 [2024-11-17 08:26:55.810619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.958 [2024-11-17 08:26:55.810628] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:50.958 [2024-11-17 08:26:55.810638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:50.958 [2024-11-17 08:26:55.810647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.958 [2024-11-17 08:26:55.810656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:50.958 [2024-11-17 08:26:55.810665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:50.958 [2024-11-17 08:26:55.810674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.958 [2024-11-17 08:26:55.810696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:50.958 [2024-11-17 08:26:55.810705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:50.958 [2024-11-17 08:26:55.810714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.958 [2024-11-17 08:26:55.810723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:50.958 [2024-11-17 08:26:55.810732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:50.958 [2024-11-17 08:26:55.810741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:50.958 [2024-11-17 08:26:55.810750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:50.958 [2024-11-17 08:26:55.810759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:50.958 [2024-11-17 08:26:55.810768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.958 [2024-11-17 08:26:55.810777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:50.958 [2024-11-17 08:26:55.810786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:50.958 [2024-11-17 08:26:55.810795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.958 [2024-11-17 08:26:55.810804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:50.958 [2024-11-17 08:26:55.810813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:50.958 [2024-11-17 08:26:55.810821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.958 [2024-11-17 08:26:55.810830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:50.958 [2024-11-17 08:26:55.810839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:50.958 [2024-11-17 08:26:55.810848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.958 [2024-11-17 08:26:55.810857] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:50.958 [2024-11-17 08:26:55.810867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:50.958 [2024-11-17 08:26:55.810877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:50.958 [2024-11-17 08:26:55.810888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:50.958 [2024-11-17 08:26:55.810903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:50.958 [2024-11-17 08:26:55.810912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:50.958 [2024-11-17 08:26:55.810921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:50.958 [2024-11-17 08:26:55.810930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:50.958 [2024-11-17 08:26:55.810940] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:50.958 [2024-11-17 08:26:55.810949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:50.958 [2024-11-17 08:26:55.810959] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:50.958 [2024-11-17 08:26:55.810971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:50.958 [2024-11-17 08:26:55.810982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:50.958 [2024-11-17 08:26:55.810992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:50.958 [2024-11-17 08:26:55.811002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:50.958 [2024-11-17 08:26:55.811012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:50.958 [2024-11-17 08:26:55.811021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:50.958 [2024-11-17 08:26:55.811031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:50.958 [2024-11-17 08:26:55.811041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:50.958 [2024-11-17 08:26:55.811051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:50.958 [2024-11-17 08:26:55.811060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:50.958 [2024-11-17 08:26:55.811070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:50.958 [2024-11-17 08:26:55.811080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:50.958 [2024-11-17 08:26:55.811089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:50.958 [2024-11-17 08:26:55.811099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:50.958 [2024-11-17 08:26:55.811123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:50.958 [2024-11-17 08:26:55.811135] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:50.958 [2024-11-17 08:26:55.811146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:50.958 [2024-11-17 08:26:55.811156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:50.958 [2024-11-17 08:26:55.811166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:50.958 [2024-11-17 08:26:55.811176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:50.958 [2024-11-17 08:26:55.811186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:50.958 [2024-11-17 08:26:55.811196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.958 [2024-11-17 08:26:55.811206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:50.958 [2024-11-17 08:26:55.811218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.827 ms 00:26:50.958 [2024-11-17 08:26:55.811231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.958 [2024-11-17 08:26:55.811289] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:50.958 [2024-11-17 08:26:55.811306] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:53.493 [2024-11-17 08:26:58.153877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.493 [2024-11-17 08:26:58.153941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:53.493 [2024-11-17 08:26:58.153969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2342.607 ms 00:26:53.493 [2024-11-17 08:26:58.153985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.493 [2024-11-17 08:26:58.179452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.493 [2024-11-17 08:26:58.179504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:53.493 [2024-11-17 08:26:58.179529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.188 ms 00:26:53.493 [2024-11-17 08:26:58.179545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.493 [2024-11-17 08:26:58.179697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.493 [2024-11-17 08:26:58.179733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:53.493 [2024-11-17 08:26:58.179752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:53.493 [2024-11-17 08:26:58.179768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.493 [2024-11-17 08:26:58.211472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.493 [2024-11-17 08:26:58.211517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:53.493 [2024-11-17 08:26:58.211540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.564 ms 00:26:53.493 [2024-11-17 08:26:58.211565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.493 [2024-11-17 08:26:58.211627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.493 [2024-11-17 08:26:58.211649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:53.493 [2024-11-17 08:26:58.211667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:53.493 [2024-11-17 08:26:58.211683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.493 [2024-11-17 08:26:58.212179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.493 [2024-11-17 08:26:58.212211] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:53.493 [2024-11-17 08:26:58.212249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.361 ms 00:26:53.493 [2024-11-17 08:26:58.212269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.493 [2024-11-17 08:26:58.212359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.493 [2024-11-17 08:26:58.212384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:53.493 [2024-11-17 08:26:58.212403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:26:53.493 [2024-11-17 08:26:58.212421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.493 [2024-11-17 08:26:58.226991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.493 [2024-11-17 08:26:58.227031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:53.493 [2024-11-17 08:26:58.227054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.530 ms 00:26:53.493 [2024-11-17 08:26:58.227073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.493 [2024-11-17 08:26:58.240093] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:53.493 [2024-11-17 08:26:58.240135] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:53.493 [2024-11-17 08:26:58.240159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.493 [2024-11-17 08:26:58.240177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:53.493 [2024-11-17 08:26:58.240195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.904 ms 00:26:53.493 [2024-11-17 08:26:58.240212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.493 [2024-11-17 08:26:58.254306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.493 [2024-11-17 08:26:58.254346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:53.493 [2024-11-17 08:26:58.254370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.055 ms 00:26:53.493 [2024-11-17 08:26:58.254389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.493 [2024-11-17 08:26:58.266535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.493 [2024-11-17 08:26:58.266575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:53.494 [2024-11-17 08:26:58.266597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.086 ms 00:26:53.494 [2024-11-17 08:26:58.266614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.494 [2024-11-17 08:26:58.278740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.494 [2024-11-17 08:26:58.278780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:53.494 [2024-11-17 08:26:58.278802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.089 ms 00:26:53.494 [2024-11-17 08:26:58.278819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.494 [2024-11-17 08:26:58.279625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.494 [2024-11-17 08:26:58.279665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:53.494 [2024-11-17 
08:26:58.279687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.680 ms 00:26:53.494 [2024-11-17 08:26:58.279705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.494 [2024-11-17 08:26:58.352218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.494 [2024-11-17 08:26:58.352291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:53.494 [2024-11-17 08:26:58.352320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.475 ms 00:26:53.494 [2024-11-17 08:26:58.352336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.494 [2024-11-17 08:26:58.362506] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:53.494 [2024-11-17 08:26:58.363204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.494 [2024-11-17 08:26:58.363237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:53.494 [2024-11-17 08:26:58.363260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.782 ms 00:26:53.494 [2024-11-17 08:26:58.363278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.494 [2024-11-17 08:26:58.363419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.494 [2024-11-17 08:26:58.363478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:53.494 [2024-11-17 08:26:58.363500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:53.494 [2024-11-17 08:26:58.363518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.494 [2024-11-17 08:26:58.363647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.494 [2024-11-17 08:26:58.363674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:53.494 [2024-11-17 08:26:58.363695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:53.494 [2024-11-17 08:26:58.363712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.494 [2024-11-17 08:26:58.363765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.494 [2024-11-17 08:26:58.363802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:53.494 [2024-11-17 08:26:58.363821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:53.494 [2024-11-17 08:26:58.363846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.494 [2024-11-17 08:26:58.363937] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:53.494 [2024-11-17 08:26:58.363980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.494 [2024-11-17 08:26:58.363998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:53.494 [2024-11-17 08:26:58.364015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:26:53.494 [2024-11-17 08:26:58.364031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.494 [2024-11-17 08:26:58.388266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.494 [2024-11-17 08:26:58.388313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:53.494 [2024-11-17 08:26:58.388337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.190 ms 00:26:53.494 [2024-11-17 08:26:58.388356] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.494 [2024-11-17 08:26:58.388456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:53.494 [2024-11-17 08:26:58.388482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:53.494 [2024-11-17 08:26:58.388502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:26:53.494 [2024-11-17 08:26:58.388519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:53.494 [2024-11-17 08:26:58.390029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2603.280 ms, result 0 00:26:53.494 [2024-11-17 08:26:58.404664] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.494 [2024-11-17 08:26:58.420665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:53.494 [2024-11-17 08:26:58.428794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:54.430 08:26:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.430 08:26:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:54.430 08:26:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:54.430 08:26:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:54.430 08:26:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:54.430 [2024-11-17 08:26:59.413639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.430 [2024-11-17 08:26:59.413705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:54.430 [2024-11-17 08:26:59.413733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:54.430 [2024-11-17 08:26:59.413758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.430 [2024-11-17 08:26:59.413803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.430 [2024-11-17 08:26:59.413825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:54.430 [2024-11-17 08:26:59.413841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:54.430 [2024-11-17 08:26:59.413856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.430 [2024-11-17 08:26:59.413896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:54.430 [2024-11-17 08:26:59.413916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:54.430 [2024-11-17 08:26:59.413933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:54.430 [2024-11-17 08:26:59.413949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:54.430 [2024-11-17 08:26:59.414063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.397 ms, result 0 00:26:54.430 true 00:26:54.431 08:26:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:54.689 { 00:26:54.689 "name": "ftl", 00:26:54.689 "properties": [ 00:26:54.689 { 00:26:54.689 "name": "superblock_version", 00:26:54.689 "value": 5, 00:26:54.689 "read-only": true 00:26:54.689 }, 
00:26:54.689 { 00:26:54.689 "name": "base_device", 00:26:54.689 "bands": [ 00:26:54.689 { 00:26:54.689 "id": 0, 00:26:54.689 "state": "CLOSED", 00:26:54.689 "validity": 1.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 1, 00:26:54.689 "state": "CLOSED", 00:26:54.689 "validity": 1.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 2, 00:26:54.689 "state": "CLOSED", 00:26:54.689 "validity": 0.007843137254901933 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 3, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 4, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 5, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 6, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 7, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 8, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 9, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 10, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 11, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 12, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 13, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 14, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 15, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 16, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "id": 17, 00:26:54.689 "state": "FREE", 00:26:54.689 "validity": 0.0 00:26:54.689 } 00:26:54.689 ], 00:26:54.689 "read-only": true 00:26:54.689 }, 00:26:54.689 { 00:26:54.689 "name": "cache_device", 00:26:54.689 "type": "bdev", 00:26:54.689 "chunks": [ 00:26:54.689 { 00:26:54.689 "id": 0, 00:26:54.689 "state": "INACTIVE", 00:26:54.689 "utilization": 0.0 00:26:54.690 }, 00:26:54.690 { 00:26:54.690 "id": 1, 00:26:54.690 "state": "OPEN", 00:26:54.690 "utilization": 0.0 00:26:54.690 }, 00:26:54.690 { 00:26:54.690 "id": 2, 00:26:54.690 "state": "OPEN", 00:26:54.690 "utilization": 0.0 00:26:54.690 }, 00:26:54.690 { 00:26:54.690 "id": 3, 00:26:54.690 "state": "FREE", 00:26:54.690 "utilization": 0.0 00:26:54.690 }, 00:26:54.690 { 00:26:54.690 "id": 4, 00:26:54.690 "state": "FREE", 00:26:54.690 "utilization": 0.0 00:26:54.690 } 00:26:54.690 ], 00:26:54.690 "read-only": true 00:26:54.690 }, 00:26:54.690 { 00:26:54.690 "name": "verbose_mode", 00:26:54.690 "value": true, 00:26:54.690 "unit": "", 00:26:54.690 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:54.690 }, 00:26:54.690 { 00:26:54.690 "name": "prep_upgrade_on_shutdown", 00:26:54.690 "value": false, 00:26:54.690 "unit": "", 00:26:54.690 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:54.690 } 00:26:54.690 ] 00:26:54.690 } 00:26:54.949 08:26:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:54.949 08:26:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:54.949 08:26:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:55.207 08:26:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:55.207 08:26:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:55.207 08:26:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:55.207 08:26:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:55.207 08:26:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:55.467 Validate MD5 checksum, iteration 1 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:55.467 08:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:55.467 [2024-11-17 08:27:00.324760] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:26:55.467 [2024-11-17 08:27:00.324902] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80849 ] 00:26:55.727 [2024-11-17 08:27:00.486124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.727 [2024-11-17 08:27:00.568908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.105  [2024-11-17T08:27:03.495Z] Copying: 479/1024 [MB] (479 MBps) [2024-11-17T08:27:03.495Z] Copying: 943/1024 [MB] (464 MBps) [2024-11-17T08:27:05.398Z] Copying: 1024/1024 [MB] (average 471 MBps) 00:27:00.386 00:27:00.386 08:27:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:00.386 08:27:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:01.762 08:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:01.762 Validate MD5 checksum, iteration 2 00:27:01.762 08:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=27755c57df2df43be5d890199af449c5 00:27:01.762 08:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 27755c57df2df43be5d890199af449c5 != \2\7\7\5\5\c\5\7\d\f\2\d\f\4\3\b\e\5\d\8\9\0\1\9\9\a\f\4\4\9\c\5 ]] 00:27:01.762 08:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:01.762 08:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:01.762 08:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:01.762 08:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:01.762 08:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:01.762 08:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:01.762 08:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:01.762 08:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:01.762 08:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:02.021 [2024-11-17 08:27:06.821145] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:27:02.021 [2024-11-17 08:27:06.821331] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80913 ] 00:27:02.021 [2024-11-17 08:27:07.001824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.280 [2024-11-17 08:27:07.125760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.656  [2024-11-17T08:27:10.047Z] Copying: 482/1024 [MB] (482 MBps) [2024-11-17T08:27:10.047Z] Copying: 970/1024 [MB] (488 MBps) [2024-11-17T08:27:11.002Z] Copying: 1024/1024 [MB] (average 485 MBps) 00:27:05.990 00:27:05.990 08:27:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:05.990 08:27:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f1f0952778f016b62af8fdeeb0a2032e 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f1f0952778f016b62af8fdeeb0a2032e != \f\1\f\0\9\5\2\7\7\8\f\0\1\6\b\6\2\a\f\8\f\d\e\e\b\0\a\2\0\3\2\e ]] 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80770 ]] 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80770 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80980 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80980 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80980 ']' 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
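The tcp_target_shutdown_dirty step traced above is the crux of the upgrade_shutdown test: the target is killed with SIGKILL, so FTL gets no chance to persist a clean shutdown state, and a fresh target is then started from the same saved config and must recover from on-disk metadata (hence the 'Recover band state' and 'Restore P2L checkpoints' actions in the startup that follows). A minimal sketch of that step, assuming the paths shown in the trace:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # Kill the running target without a clean FTL shutdown.
  if [[ -n $spdk_tgt_pid ]]; then
      kill -9 "$spdk_tgt_pid"
      unset spdk_tgt_pid
  fi

  # Restart from the same config; FTL now has to take the dirty
  # recovery path instead of scrubbing a fresh NV cache.
  "$spdk_tgt" '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!   # 80980 in this run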
00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:07.909 08:27:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:07.909 [2024-11-17 08:27:12.638909] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:07.909 [2024-11-17 08:27:12.639052] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80980 ] 00:27:07.909 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 80770 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:07.909 [2024-11-17 08:27:12.801475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.909 [2024-11-17 08:27:12.879979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.848 [2024-11-17 08:27:13.606462] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:08.848 [2024-11-17 08:27:13.606705] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:08.848 [2024-11-17 08:27:13.751576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.848 [2024-11-17 08:27:13.751762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:08.848 [2024-11-17 08:27:13.751790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:08.848 [2024-11-17 08:27:13.751803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.848 [2024-11-17 08:27:13.751877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.848 [2024-11-17 08:27:13.751894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:08.848 [2024-11-17 08:27:13.751905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:08.848 [2024-11-17 08:27:13.751914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.848 [2024-11-17 08:27:13.751953] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:08.848 [2024-11-17 08:27:13.752791] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:08.848 [2024-11-17 08:27:13.752818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.848 [2024-11-17 08:27:13.752828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:08.848 [2024-11-17 08:27:13.752839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.879 ms 00:27:08.848 [2024-11-17 08:27:13.752848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.848 [2024-11-17 08:27:13.753286] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:08.848 [2024-11-17 08:27:13.769617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.848 [2024-11-17 08:27:13.769656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:08.848 [2024-11-17 08:27:13.769688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.333 ms 00:27:08.848 [2024-11-17 08:27:13.769697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.848 [2024-11-17 08:27:13.781542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:27:08.848 [2024-11-17 08:27:13.781703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:08.848 [2024-11-17 08:27:13.781824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:08.848 [2024-11-17 08:27:13.781927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.848 [2024-11-17 08:27:13.782399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.848 [2024-11-17 08:27:13.782557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:08.848 [2024-11-17 08:27:13.782579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:27:08.848 [2024-11-17 08:27:13.782589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.848 [2024-11-17 08:27:13.782664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.848 [2024-11-17 08:27:13.782684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:08.848 [2024-11-17 08:27:13.782694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:08.848 [2024-11-17 08:27:13.782703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.848 [2024-11-17 08:27:13.782736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.848 [2024-11-17 08:27:13.782749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:08.848 [2024-11-17 08:27:13.782759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:08.848 [2024-11-17 08:27:13.782768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.848 [2024-11-17 08:27:13.782796] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:08.848 [2024-11-17 08:27:13.786225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.848 [2024-11-17 08:27:13.786369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:08.848 [2024-11-17 08:27:13.786478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.435 ms 00:27:08.848 [2024-11-17 08:27:13.786602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.848 [2024-11-17 08:27:13.786681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.848 [2024-11-17 08:27:13.786791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:08.848 [2024-11-17 08:27:13.786885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:08.848 [2024-11-17 08:27:13.786928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.848 [2024-11-17 08:27:13.787065] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:08.848 [2024-11-17 08:27:13.787213] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:08.848 [2024-11-17 08:27:13.787464] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:08.848 [2024-11-17 08:27:13.787616] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:08.848 [2024-11-17 08:27:13.787900] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:08.848 [2024-11-17 08:27:13.788025] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:08.848 [2024-11-17 08:27:13.788174] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:08.848 [2024-11-17 08:27:13.788308] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:08.848 [2024-11-17 08:27:13.788372] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:08.848 [2024-11-17 08:27:13.788491] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:08.848 [2024-11-17 08:27:13.788531] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:08.848 [2024-11-17 08:27:13.788640] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:08.848 [2024-11-17 08:27:13.788727] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:08.848 [2024-11-17 08:27:13.788772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.848 [2024-11-17 08:27:13.788817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:08.848 [2024-11-17 08:27:13.788921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.710 ms 00:27:08.848 [2024-11-17 08:27:13.788963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.848 [2024-11-17 08:27:13.789080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.848 [2024-11-17 08:27:13.789261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:08.848 [2024-11-17 08:27:13.789310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:27:08.848 [2024-11-17 08:27:13.789346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.848 [2024-11-17 08:27:13.789524] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:08.848 [2024-11-17 08:27:13.789569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:08.848 [2024-11-17 08:27:13.789674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:08.848 [2024-11-17 08:27:13.789721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.848 [2024-11-17 08:27:13.789809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:08.848 [2024-11-17 08:27:13.789916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:08.848 [2024-11-17 08:27:13.790012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:08.848 [2024-11-17 08:27:13.790118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:08.848 [2024-11-17 08:27:13.790164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:08.848 [2024-11-17 08:27:13.790251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.848 [2024-11-17 08:27:13.790271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:08.848 [2024-11-17 08:27:13.790282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:08.848 [2024-11-17 08:27:13.790291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.848 [2024-11-17 08:27:13.790299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:08.848 [2024-11-17 08:27:13.790308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:27:08.848 [2024-11-17 08:27:13.790316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.848 [2024-11-17 08:27:13.790325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:08.848 [2024-11-17 08:27:13.790334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:08.849 [2024-11-17 08:27:13.790342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.849 [2024-11-17 08:27:13.790351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:08.849 [2024-11-17 08:27:13.790360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:08.849 [2024-11-17 08:27:13.790368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:08.849 [2024-11-17 08:27:13.790387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:08.849 [2024-11-17 08:27:13.790415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:08.849 [2024-11-17 08:27:13.790426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:08.849 [2024-11-17 08:27:13.790435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:08.849 [2024-11-17 08:27:13.790446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:08.849 [2024-11-17 08:27:13.790455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:08.849 [2024-11-17 08:27:13.790464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:08.849 [2024-11-17 08:27:13.790473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:08.849 [2024-11-17 08:27:13.790481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:08.849 [2024-11-17 08:27:13.790490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:08.849 [2024-11-17 08:27:13.790498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:08.849 [2024-11-17 08:27:13.790507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.849 [2024-11-17 08:27:13.790515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:08.849 [2024-11-17 08:27:13.790524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:08.849 [2024-11-17 08:27:13.790533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.849 [2024-11-17 08:27:13.790541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:08.849 [2024-11-17 08:27:13.790550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:08.849 [2024-11-17 08:27:13.790560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.849 [2024-11-17 08:27:13.790569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:08.849 [2024-11-17 08:27:13.790577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:08.849 [2024-11-17 08:27:13.790586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:08.849 [2024-11-17 08:27:13.790594] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:08.849 [2024-11-17 08:27:13.790604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:08.849 [2024-11-17 08:27:13.790613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:08.849 [2024-11-17 08:27:13.790624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:27:08.849 [2024-11-17 08:27:13.790634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:08.849 [2024-11-17 08:27:13.790642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:08.849 [2024-11-17 08:27:13.790651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:08.849 [2024-11-17 08:27:13.790660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:08.849 [2024-11-17 08:27:13.790668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:08.849 [2024-11-17 08:27:13.790677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:08.849 [2024-11-17 08:27:13.790687] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:08.849 [2024-11-17 08:27:13.790700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:08.849 [2024-11-17 08:27:13.790712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:08.849 [2024-11-17 08:27:13.790722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:08.849 [2024-11-17 08:27:13.790732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:08.849 [2024-11-17 08:27:13.790742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:08.849 [2024-11-17 08:27:13.790751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:08.849 [2024-11-17 08:27:13.790761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:08.849 [2024-11-17 08:27:13.790771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:08.849 [2024-11-17 08:27:13.790781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:08.849 [2024-11-17 08:27:13.790790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:08.849 [2024-11-17 08:27:13.790800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:08.849 [2024-11-17 08:27:13.790809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:08.849 [2024-11-17 08:27:13.790819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:08.849 [2024-11-17 08:27:13.790829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:08.849 [2024-11-17 08:27:13.790839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:08.849 [2024-11-17 08:27:13.790848] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:27:08.849 [2024-11-17 08:27:13.790859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:08.849 [2024-11-17 08:27:13.790870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:08.849 [2024-11-17 08:27:13.790881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:08.849 [2024-11-17 08:27:13.790890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:08.849 [2024-11-17 08:27:13.790901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:08.849 [2024-11-17 08:27:13.790912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.849 [2024-11-17 08:27:13.790928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:08.849 [2024-11-17 08:27:13.790938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.448 ms 00:27:08.849 [2024-11-17 08:27:13.790947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.849 [2024-11-17 08:27:13.817834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.849 [2024-11-17 08:27:13.818069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:08.849 [2024-11-17 08:27:13.818222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.799 ms 00:27:08.849 [2024-11-17 08:27:13.818353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.849 [2024-11-17 08:27:13.818449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.849 [2024-11-17 08:27:13.818610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:08.849 [2024-11-17 08:27:13.818731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:08.849 [2024-11-17 08:27:13.818779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.849 [2024-11-17 08:27:13.850412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.849 [2024-11-17 08:27:13.850452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:08.849 [2024-11-17 08:27:13.850467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.524 ms 00:27:08.849 [2024-11-17 08:27:13.850476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.849 [2024-11-17 08:27:13.850520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.849 [2024-11-17 08:27:13.850535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:08.849 [2024-11-17 08:27:13.850545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:08.849 [2024-11-17 08:27:13.850553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.849 [2024-11-17 08:27:13.850697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.849 [2024-11-17 08:27:13.850713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:08.849 [2024-11-17 08:27:13.850724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:27:08.849 [2024-11-17 08:27:13.850733] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:08.849 [2024-11-17 08:27:13.850779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.849 [2024-11-17 08:27:13.850792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:08.849 [2024-11-17 08:27:13.850802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:08.849 [2024-11-17 08:27:13.850810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.109 [2024-11-17 08:27:13.866926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.109 [2024-11-17 08:27:13.867119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:09.109 [2024-11-17 08:27:13.867146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.088 ms 00:27:09.109 [2024-11-17 08:27:13.867158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.109 [2024-11-17 08:27:13.867301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.109 [2024-11-17 08:27:13.867334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:09.109 [2024-11-17 08:27:13.867349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:09.109 [2024-11-17 08:27:13.867358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.109 [2024-11-17 08:27:13.891709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.109 [2024-11-17 08:27:13.891749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:09.109 [2024-11-17 08:27:13.891765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.312 ms 00:27:09.109 [2024-11-17 08:27:13.891775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.109 [2024-11-17 08:27:13.901561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.109 [2024-11-17 08:27:13.901597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:09.109 [2024-11-17 08:27:13.901620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.537 ms 00:27:09.109 [2024-11-17 08:27:13.901630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.109 [2024-11-17 08:27:13.958207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.109 [2024-11-17 08:27:13.958503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:09.109 [2024-11-17 08:27:13.958540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 56.512 ms 00:27:09.109 [2024-11-17 08:27:13.958552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.109 [2024-11-17 08:27:13.958745] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:09.109 [2024-11-17 08:27:13.958889] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:09.109 [2024-11-17 08:27:13.959006] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:09.109 [2024-11-17 08:27:13.959177] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:09.109 [2024-11-17 08:27:13.959194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.109 [2024-11-17 08:27:13.959203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:09.109 [2024-11-17 
08:27:13.959214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.582 ms 00:27:09.109 [2024-11-17 08:27:13.959223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.109 [2024-11-17 08:27:13.959347] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:09.109 [2024-11-17 08:27:13.959367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.109 [2024-11-17 08:27:13.959381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:09.109 [2024-11-17 08:27:13.959392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:09.109 [2024-11-17 08:27:13.959401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.109 [2024-11-17 08:27:13.974855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.109 [2024-11-17 08:27:13.974895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:09.109 [2024-11-17 08:27:13.974910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.365 ms 00:27:09.109 [2024-11-17 08:27:13.974920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.109 [2024-11-17 08:27:13.984211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.109 [2024-11-17 08:27:13.984246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:09.109 [2024-11-17 08:27:13.984275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:09.109 [2024-11-17 08:27:13.984285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.109 [2024-11-17 08:27:13.984395] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:09.109 [2024-11-17 08:27:13.984531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.109 [2024-11-17 08:27:13.984548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:09.109 [2024-11-17 08:27:13.984558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.139 ms 00:27:09.109 [2024-11-17 08:27:13.984567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.679 [2024-11-17 08:27:14.580152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.679 [2024-11-17 08:27:14.580434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:09.679 [2024-11-17 08:27:14.580465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 594.664 ms 00:27:09.679 [2024-11-17 08:27:14.580477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.679 [2024-11-17 08:27:14.584947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.679 [2024-11-17 08:27:14.584990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:09.679 [2024-11-17 08:27:14.585021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.101 ms 00:27:09.679 [2024-11-17 08:27:14.585032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.679 [2024-11-17 08:27:14.585653] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:09.679 [2024-11-17 08:27:14.585698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.679 [2024-11-17 08:27:14.585742] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:09.679 [2024-11-17 08:27:14.585754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.587 ms 00:27:09.679 [2024-11-17 08:27:14.585764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.679 [2024-11-17 08:27:14.585807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.679 [2024-11-17 08:27:14.585839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:09.679 [2024-11-17 08:27:14.585850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:09.679 [2024-11-17 08:27:14.585860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.679 [2024-11-17 08:27:14.585959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 601.519 ms, result 0 00:27:09.679 [2024-11-17 08:27:14.586012] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:09.679 [2024-11-17 08:27:14.586091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.679 [2024-11-17 08:27:14.586104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:09.679 [2024-11-17 08:27:14.586114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:27:09.679 [2024-11-17 08:27:14.586137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.195136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.195246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:10.248 [2024-11-17 08:27:15.195265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 608.075 ms 00:27:10.248 [2024-11-17 08:27:15.195287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.199758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.199949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:10.248 [2024-11-17 08:27:15.200090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.868 ms 00:27:10.248 [2024-11-17 08:27:15.200208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.200727] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:10.248 [2024-11-17 08:27:15.200937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.201045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:10.248 [2024-11-17 08:27:15.201110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.645 ms 00:27:10.248 [2024-11-17 08:27:15.201215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.201299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.201400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:10.248 [2024-11-17 08:27:15.201498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:10.248 [2024-11-17 08:27:15.201542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 
08:27:15.201604] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 615.586 ms, result 0 00:27:10.248 [2024-11-17 08:27:15.201655] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:10.248 [2024-11-17 08:27:15.201670] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:10.248 [2024-11-17 08:27:15.201682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.201692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:10.248 [2024-11-17 08:27:15.201703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1217.316 ms 00:27:10.248 [2024-11-17 08:27:15.201712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.201748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.201761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:10.248 [2024-11-17 08:27:15.201777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:10.248 [2024-11-17 08:27:15.201787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.212406] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:10.248 [2024-11-17 08:27:15.212566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.212584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:10.248 [2024-11-17 08:27:15.212596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.759 ms 00:27:10.248 [2024-11-17 08:27:15.212606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.213266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.213299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:10.248 [2024-11-17 08:27:15.213317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.576 ms 00:27:10.248 [2024-11-17 08:27:15.213328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.215428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.215616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:10.248 [2024-11-17 08:27:15.215641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.072 ms 00:27:10.248 [2024-11-17 08:27:15.215652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.215723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.215741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:10.248 [2024-11-17 08:27:15.215753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:10.248 [2024-11-17 08:27:15.215768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.215881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.215911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:10.248 
[2024-11-17 08:27:15.215922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:10.248 [2024-11-17 08:27:15.215931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.215955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.215967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:10.248 [2024-11-17 08:27:15.215977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:10.248 [2024-11-17 08:27:15.215986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.216024] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:10.248 [2024-11-17 08:27:15.216041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.216051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:10.248 [2024-11-17 08:27:15.216062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:10.248 [2024-11-17 08:27:15.216071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.216164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:10.248 [2024-11-17 08:27:15.216182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:10.248 [2024-11-17 08:27:15.216193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:27:10.248 [2024-11-17 08:27:15.216203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:10.248 [2024-11-17 08:27:15.217551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1465.478 ms, result 0 00:27:10.248 [2024-11-17 08:27:15.232983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.248 [2024-11-17 08:27:15.248976] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:10.248 [2024-11-17 08:27:15.257447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:10.508 Validate MD5 checksum, iteration 1 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:10.508 08:27:15 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:10.508 08:27:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:10.508 [2024-11-17 08:27:15.374521] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:10.508 [2024-11-17 08:27:15.374910] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81015 ] 00:27:10.767 [2024-11-17 08:27:15.547234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.767 [2024-11-17 08:27:15.670531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.673  [2024-11-17T08:27:18.252Z] Copying: 484/1024 [MB] (484 MBps) [2024-11-17T08:27:18.510Z] Copying: 968/1024 [MB] (484 MBps) [2024-11-17T08:27:20.413Z] Copying: 1024/1024 [MB] (average 483 MBps) 00:27:15.401 00:27:15.401 08:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:15.401 08:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:17.308 08:27:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:17.308 Validate MD5 checksum, iteration 2 00:27:17.308 08:27:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=27755c57df2df43be5d890199af449c5 00:27:17.308 08:27:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 27755c57df2df43be5d890199af449c5 != \2\7\7\5\5\c\5\7\d\f\2\d\f\4\3\b\e\5\d\8\9\0\1\9\9\a\f\4\4\9\c\5 ]] 00:27:17.308 08:27:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:17.308 08:27:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:17.308 08:27:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:17.308 08:27:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:17.308 08:27:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:17.308 08:27:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:17.308 08:27:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:17.308 08:27:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:17.308 08:27:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:17.567 
[2024-11-17 08:27:22.332667] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:17.567 [2024-11-17 08:27:22.333014] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81082 ] 00:27:17.567 [2024-11-17 08:27:22.515845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.827 [2024-11-17 08:27:22.640440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.209  [2024-11-17T08:27:25.159Z] Copying: 492/1024 [MB] (492 MBps) [2024-11-17T08:27:25.419Z] Copying: 971/1024 [MB] (479 MBps) [2024-11-17T08:27:27.323Z] Copying: 1024/1024 [MB] (average 485 MBps) 00:27:22.311 00:27:22.311 08:27:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:22.311 08:27:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f1f0952778f016b62af8fdeeb0a2032e 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f1f0952778f016b62af8fdeeb0a2032e != \f\1\f\0\9\5\2\7\7\8\f\0\1\6\b\6\2\a\f\8\f\d\e\e\b\0\a\2\0\3\2\e ]] 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80980 ]] 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80980 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80980 ']' 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80980 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:24.224 08:27:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80980 00:27:24.224 killing process with pid 80980 00:27:24.224 08:27:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:24.224 08:27:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:24.224 08:27:29 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 80980' 00:27:24.224 08:27:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80980 00:27:24.224 08:27:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80980 00:27:24.791 [2024-11-17 08:27:29.751521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:24.791 [2024-11-17 08:27:29.766448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.791 [2024-11-17 08:27:29.766490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:24.791 [2024-11-17 08:27:29.766507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:24.791 [2024-11-17 08:27:29.766517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.791 [2024-11-17 08:27:29.766543] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:24.791 [2024-11-17 08:27:29.769577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.791 [2024-11-17 08:27:29.769828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:24.791 [2024-11-17 08:27:29.769854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.015 ms 00:27:24.791 [2024-11-17 08:27:29.769875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.791 [2024-11-17 08:27:29.770156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.791 [2024-11-17 08:27:29.770206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:24.791 [2024-11-17 08:27:29.770218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.247 ms 00:27:24.791 [2024-11-17 08:27:29.770227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.791 [2024-11-17 08:27:29.771511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.791 [2024-11-17 08:27:29.771565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:24.791 [2024-11-17 08:27:29.771581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.263 ms 00:27:24.791 [2024-11-17 08:27:29.771593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.791 [2024-11-17 08:27:29.772819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.791 [2024-11-17 08:27:29.773003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:24.791 [2024-11-17 08:27:29.773042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.160 ms 00:27:24.791 [2024-11-17 08:27:29.773054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.791 [2024-11-17 08:27:29.783404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.791 [2024-11-17 08:27:29.783630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:24.791 [2024-11-17 08:27:29.783660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.245 ms 00:27:24.791 [2024-11-17 08:27:29.783680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.791 [2024-11-17 08:27:29.789414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.791 [2024-11-17 08:27:29.789450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:24.791 [2024-11-17 08:27:29.789463] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.687 ms 00:27:24.791 [2024-11-17 08:27:29.789473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.791 [2024-11-17 08:27:29.789542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.791 [2024-11-17 08:27:29.789559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:24.791 [2024-11-17 08:27:29.789569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:24.791 [2024-11-17 08:27:29.789577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.791 [2024-11-17 08:27:29.800113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.051 [2024-11-17 08:27:29.800357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:25.051 [2024-11-17 08:27:29.800387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.512 ms 00:27:25.051 [2024-11-17 08:27:29.800398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.051 [2024-11-17 08:27:29.811230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.051 [2024-11-17 08:27:29.811397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:25.051 [2024-11-17 08:27:29.811422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.788 ms 00:27:25.051 [2024-11-17 08:27:29.811433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.051 [2024-11-17 08:27:29.821615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.051 [2024-11-17 08:27:29.821650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:25.051 [2024-11-17 08:27:29.821663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.097 ms 00:27:25.051 [2024-11-17 08:27:29.821671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.051 [2024-11-17 08:27:29.831795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.051 [2024-11-17 08:27:29.831830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:25.051 [2024-11-17 08:27:29.831843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.063 ms 00:27:25.052 [2024-11-17 08:27:29.831851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.052 [2024-11-17 08:27:29.831886] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:25.052 [2024-11-17 08:27:29.831906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:25.052 [2024-11-17 08:27:29.831918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:25.052 [2024-11-17 08:27:29.831927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:25.052 [2024-11-17 08:27:29.831937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.831946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.831955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.831965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 
08:27:29.831974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.831983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.831992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.832001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.832010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.832019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.832028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.832036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.832046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.832055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.832064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:25.052 [2024-11-17 08:27:29.832075] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:25.052 [2024-11-17 08:27:29.832117] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c55b0063-0f06-4dc7-9148-051e9c1135b4 00:27:25.052 [2024-11-17 08:27:29.832129] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:25.052 [2024-11-17 08:27:29.832137] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:25.052 [2024-11-17 08:27:29.832146] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:25.052 [2024-11-17 08:27:29.832155] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:25.052 [2024-11-17 08:27:29.832163] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:25.052 [2024-11-17 08:27:29.832184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:25.052 [2024-11-17 08:27:29.832193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:25.052 [2024-11-17 08:27:29.832217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:25.052 [2024-11-17 08:27:29.832225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:25.052 [2024-11-17 08:27:29.832237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.052 [2024-11-17 08:27:29.832253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:25.052 [2024-11-17 08:27:29.832263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.351 ms 00:27:25.052 [2024-11-17 08:27:29.832273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.052 [2024-11-17 08:27:29.847365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.052 [2024-11-17 08:27:29.847398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:25.052 [2024-11-17 08:27:29.847412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 15.071 ms 00:27:25.052 [2024-11-17 08:27:29.847422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.052 [2024-11-17 08:27:29.847869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.052 [2024-11-17 08:27:29.847885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:25.052 [2024-11-17 08:27:29.847895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.398 ms 00:27:25.052 [2024-11-17 08:27:29.847903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.052 [2024-11-17 08:27:29.894244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:25.052 [2024-11-17 08:27:29.894287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:25.052 [2024-11-17 08:27:29.894300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:25.052 [2024-11-17 08:27:29.894309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.052 [2024-11-17 08:27:29.894353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:25.052 [2024-11-17 08:27:29.894366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:25.052 [2024-11-17 08:27:29.894376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:25.052 [2024-11-17 08:27:29.894384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.052 [2024-11-17 08:27:29.894485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:25.052 [2024-11-17 08:27:29.894519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:25.052 [2024-11-17 08:27:29.894529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:25.052 [2024-11-17 08:27:29.894539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.052 [2024-11-17 08:27:29.894559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:25.052 [2024-11-17 08:27:29.894578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:25.052 [2024-11-17 08:27:29.894587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:25.052 [2024-11-17 08:27:29.894596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.052 [2024-11-17 08:27:29.973218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:25.052 [2024-11-17 08:27:29.973274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:25.052 [2024-11-17 08:27:29.973290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:25.052 [2024-11-17 08:27:29.973299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.052 [2024-11-17 08:27:30.043940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:25.052 [2024-11-17 08:27:30.044272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:25.052 [2024-11-17 08:27:30.044300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:25.052 [2024-11-17 08:27:30.044312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.052 [2024-11-17 08:27:30.044418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:25.052 [2024-11-17 08:27:30.044453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:25.052 [2024-11-17 08:27:30.044464] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:25.052 [2024-11-17 08:27:30.044475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.052 [2024-11-17 08:27:30.044552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:25.052 [2024-11-17 08:27:30.044569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:25.052 [2024-11-17 08:27:30.044618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:25.053 [2024-11-17 08:27:30.044654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.053 [2024-11-17 08:27:30.044775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:25.053 [2024-11-17 08:27:30.044808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:25.053 [2024-11-17 08:27:30.044835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:25.053 [2024-11-17 08:27:30.044844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.053 [2024-11-17 08:27:30.044893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:25.053 [2024-11-17 08:27:30.044910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:25.053 [2024-11-17 08:27:30.044920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:25.053 [2024-11-17 08:27:30.044935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.053 [2024-11-17 08:27:30.044990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:25.053 [2024-11-17 08:27:30.045003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:25.053 [2024-11-17 08:27:30.045013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:25.053 [2024-11-17 08:27:30.045021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.053 [2024-11-17 08:27:30.045064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:25.053 [2024-11-17 08:27:30.045078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:25.053 [2024-11-17 08:27:30.045091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:25.053 [2024-11-17 08:27:30.045100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.053 [2024-11-17 08:27:30.045309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 278.809 ms, result 0 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:25.991 Remove shared memory files 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80770 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:25.991 ************************************ 00:27:25.991 END TEST ftl_upgrade_shutdown 00:27:25.991 ************************************ 00:27:25.991 00:27:25.991 real 1m25.290s 00:27:25.991 user 2m0.952s 00:27:25.991 sys 0m21.667s 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:25.991 08:27:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:25.991 08:27:30 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:27:25.991 08:27:30 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:25.991 08:27:31 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:27:26.252 08:27:31 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:26.252 08:27:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:26.252 ************************************ 00:27:26.252 START TEST ftl_restore_fast 00:27:26.252 ************************************ 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:26.252 * Looking for test storage... 00:27:26.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:26.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.252 --rc genhtml_branch_coverage=1 00:27:26.252 --rc genhtml_function_coverage=1 00:27:26.252 --rc genhtml_legend=1 00:27:26.252 --rc geninfo_all_blocks=1 00:27:26.252 --rc geninfo_unexecuted_blocks=1 00:27:26.252 00:27:26.252 ' 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:26.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.252 --rc genhtml_branch_coverage=1 00:27:26.252 --rc genhtml_function_coverage=1 00:27:26.252 --rc genhtml_legend=1 00:27:26.252 --rc geninfo_all_blocks=1 00:27:26.252 --rc geninfo_unexecuted_blocks=1 00:27:26.252 00:27:26.252 ' 00:27:26.252 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:26.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.252 --rc genhtml_branch_coverage=1 00:27:26.252 --rc genhtml_function_coverage=1 00:27:26.253 --rc genhtml_legend=1 00:27:26.253 --rc geninfo_all_blocks=1 00:27:26.253 --rc geninfo_unexecuted_blocks=1 00:27:26.253 00:27:26.253 ' 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:26.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.253 --rc genhtml_branch_coverage=1 00:27:26.253 --rc genhtml_function_coverage=1 00:27:26.253 --rc genhtml_legend=1 00:27:26.253 --rc geninfo_all_blocks=1 00:27:26.253 --rc geninfo_unexecuted_blocks=1 00:27:26.253 00:27:26.253 ' 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.t2sDmeTk7b 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:27:26.253 08:27:31 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=81252 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 81252 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 81252 ']' 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.253 08:27:31 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:27:26.513 [2024-11-17 08:27:31.362323] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:26.513 [2024-11-17 08:27:31.362734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81252 ] 00:27:26.772 [2024-11-17 08:27:31.541404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.772 [2024-11-17 08:27:31.622436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.340 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:27.340 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:27:27.340 08:27:32 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:27.340 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:27:27.340 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:27.340 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:27:27.340 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:27:27.340 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:27.599 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:27.599 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:27:27.599 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:27.599 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:27.599 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:27.599 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:27.599 08:27:32 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:27:27.599 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:27.858 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:27.858 { 00:27:27.858 "name": "nvme0n1", 00:27:27.858 "aliases": [ 00:27:27.858 "2c6395c2-f5a0-45b6-843d-bb3516736613" 00:27:27.858 ], 00:27:27.858 "product_name": "NVMe disk", 00:27:27.858 "block_size": 4096, 00:27:27.858 "num_blocks": 1310720, 00:27:27.858 "uuid": "2c6395c2-f5a0-45b6-843d-bb3516736613", 00:27:27.858 "numa_id": -1, 00:27:27.858 "assigned_rate_limits": { 00:27:27.858 "rw_ios_per_sec": 0, 00:27:27.858 "rw_mbytes_per_sec": 0, 00:27:27.858 "r_mbytes_per_sec": 0, 00:27:27.858 "w_mbytes_per_sec": 0 00:27:27.858 }, 00:27:27.858 "claimed": true, 00:27:27.858 "claim_type": "read_many_write_one", 00:27:27.858 "zoned": false, 00:27:27.858 "supported_io_types": { 00:27:27.858 "read": true, 00:27:27.858 "write": true, 00:27:27.858 "unmap": true, 00:27:27.858 "flush": true, 00:27:27.858 "reset": true, 00:27:27.858 "nvme_admin": true, 00:27:27.858 "nvme_io": true, 00:27:27.858 "nvme_io_md": false, 00:27:27.858 "write_zeroes": true, 00:27:27.858 "zcopy": false, 00:27:27.858 "get_zone_info": false, 00:27:27.858 "zone_management": false, 00:27:27.858 "zone_append": false, 00:27:27.858 "compare": true, 00:27:27.858 "compare_and_write": false, 00:27:27.858 "abort": true, 00:27:27.858 "seek_hole": false, 00:27:27.858 "seek_data": false, 00:27:27.858 "copy": true, 00:27:27.858 "nvme_iov_md": false 00:27:27.858 }, 00:27:27.858 "driver_specific": { 00:27:27.858 "nvme": [ 00:27:27.858 { 00:27:27.858 "pci_address": "0000:00:11.0", 00:27:27.858 "trid": { 00:27:27.858 "trtype": "PCIe", 00:27:27.858 "traddr": "0000:00:11.0" 00:27:27.858 }, 00:27:27.858 "ctrlr_data": { 00:27:27.858 "cntlid": 0, 00:27:27.858 "vendor_id": "0x1b36", 00:27:27.858 "model_number": "QEMU NVMe Ctrl", 00:27:27.858 "serial_number": "12341", 00:27:27.858 "firmware_revision": "8.0.0", 00:27:27.858 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:27.858 "oacs": { 00:27:27.858 "security": 0, 00:27:27.858 "format": 1, 00:27:27.858 "firmware": 0, 00:27:27.858 "ns_manage": 1 00:27:27.858 }, 00:27:27.858 "multi_ctrlr": false, 00:27:27.858 "ana_reporting": false 00:27:27.858 }, 00:27:27.858 "vs": { 00:27:27.858 "nvme_version": "1.4" 00:27:27.858 }, 00:27:27.858 "ns_data": { 00:27:27.858 "id": 1, 00:27:27.858 "can_share": false 00:27:27.858 } 00:27:27.858 } 00:27:27.858 ], 00:27:27.858 "mp_policy": "active_passive" 00:27:27.858 } 00:27:27.858 } 00:27:27.858 ]' 00:27:27.858 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:27.858 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:27.859 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:28.118 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:28.118 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:28.118 08:27:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:27:28.118 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:27:28.118 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:28.118 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:27:28.118 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:28.118 08:27:32 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:28.377 08:27:33 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=eae717dc-7911-4172-8efa-c8f17872f5d0 00:27:28.377 08:27:33 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:27:28.377 08:27:33 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eae717dc-7911-4172-8efa-c8f17872f5d0 00:27:28.377 08:27:33 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=838605d8-88d5-4e53-b5a7-1dea96c9a0d1 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 838605d8-88d5-4e53-b5a7-1dea96c9a0d1 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=c668e88e-eeee-4fe7-9e4f-1691ade0ba75 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c668e88e-eeee-4fe7-9e4f-1691ade0ba75 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=c668e88e-eeee-4fe7-9e4f-1691ade0ba75 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size c668e88e-eeee-4fe7-9e4f-1691ade0ba75 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c668e88e-eeee-4fe7-9e4f-1691ade0ba75 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:27:28.946 08:27:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c668e88e-eeee-4fe7-9e4f-1691ade0ba75 00:27:29.205 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:29.205 { 00:27:29.205 "name": "c668e88e-eeee-4fe7-9e4f-1691ade0ba75", 00:27:29.205 "aliases": [ 00:27:29.205 "lvs/nvme0n1p0" 00:27:29.205 ], 00:27:29.205 "product_name": "Logical Volume", 00:27:29.205 "block_size": 4096, 00:27:29.205 "num_blocks": 26476544, 00:27:29.205 "uuid": "c668e88e-eeee-4fe7-9e4f-1691ade0ba75", 00:27:29.205 "assigned_rate_limits": { 00:27:29.205 "rw_ios_per_sec": 0, 00:27:29.205 "rw_mbytes_per_sec": 0, 00:27:29.205 "r_mbytes_per_sec": 0, 00:27:29.205 "w_mbytes_per_sec": 0 00:27:29.205 }, 00:27:29.205 "claimed": false, 00:27:29.205 "zoned": false, 00:27:29.205 "supported_io_types": { 00:27:29.205 "read": true, 00:27:29.205 "write": true, 00:27:29.205 "unmap": true, 00:27:29.205 "flush": false, 00:27:29.205 "reset": true, 00:27:29.205 "nvme_admin": false, 00:27:29.205 "nvme_io": false, 00:27:29.206 "nvme_io_md": false, 00:27:29.206 "write_zeroes": true, 00:27:29.206 "zcopy": false, 00:27:29.206 "get_zone_info": false, 00:27:29.206 "zone_management": false, 00:27:29.206 
"zone_append": false, 00:27:29.206 "compare": false, 00:27:29.206 "compare_and_write": false, 00:27:29.206 "abort": false, 00:27:29.206 "seek_hole": true, 00:27:29.206 "seek_data": true, 00:27:29.206 "copy": false, 00:27:29.206 "nvme_iov_md": false 00:27:29.206 }, 00:27:29.206 "driver_specific": { 00:27:29.206 "lvol": { 00:27:29.206 "lvol_store_uuid": "838605d8-88d5-4e53-b5a7-1dea96c9a0d1", 00:27:29.206 "base_bdev": "nvme0n1", 00:27:29.206 "thin_provision": true, 00:27:29.206 "num_allocated_clusters": 0, 00:27:29.206 "snapshot": false, 00:27:29.206 "clone": false, 00:27:29.206 "esnap_clone": false 00:27:29.206 } 00:27:29.206 } 00:27:29.206 } 00:27:29.206 ]' 00:27:29.206 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:29.206 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:29.206 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:29.206 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:29.206 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:29.206 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:27:29.206 08:27:34 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:27:29.206 08:27:34 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:27:29.206 08:27:34 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:29.774 08:27:34 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:29.774 08:27:34 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:29.774 08:27:34 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size c668e88e-eeee-4fe7-9e4f-1691ade0ba75 00:27:29.774 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c668e88e-eeee-4fe7-9e4f-1691ade0ba75 00:27:29.774 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:29.774 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:29.774 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:27:29.774 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c668e88e-eeee-4fe7-9e4f-1691ade0ba75 00:27:30.033 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:30.033 { 00:27:30.033 "name": "c668e88e-eeee-4fe7-9e4f-1691ade0ba75", 00:27:30.033 "aliases": [ 00:27:30.033 "lvs/nvme0n1p0" 00:27:30.033 ], 00:27:30.033 "product_name": "Logical Volume", 00:27:30.033 "block_size": 4096, 00:27:30.033 "num_blocks": 26476544, 00:27:30.033 "uuid": "c668e88e-eeee-4fe7-9e4f-1691ade0ba75", 00:27:30.033 "assigned_rate_limits": { 00:27:30.033 "rw_ios_per_sec": 0, 00:27:30.033 "rw_mbytes_per_sec": 0, 00:27:30.033 "r_mbytes_per_sec": 0, 00:27:30.033 "w_mbytes_per_sec": 0 00:27:30.033 }, 00:27:30.033 "claimed": false, 00:27:30.033 "zoned": false, 00:27:30.033 "supported_io_types": { 00:27:30.033 "read": true, 00:27:30.033 "write": true, 00:27:30.033 "unmap": true, 00:27:30.033 "flush": false, 00:27:30.033 "reset": true, 00:27:30.033 "nvme_admin": false, 00:27:30.033 "nvme_io": false, 00:27:30.033 "nvme_io_md": false, 00:27:30.033 "write_zeroes": true, 00:27:30.033 "zcopy": false, 00:27:30.033 "get_zone_info": false, 00:27:30.033 
"zone_management": false, 00:27:30.033 "zone_append": false, 00:27:30.033 "compare": false, 00:27:30.033 "compare_and_write": false, 00:27:30.033 "abort": false, 00:27:30.033 "seek_hole": true, 00:27:30.033 "seek_data": true, 00:27:30.033 "copy": false, 00:27:30.033 "nvme_iov_md": false 00:27:30.033 }, 00:27:30.033 "driver_specific": { 00:27:30.033 "lvol": { 00:27:30.033 "lvol_store_uuid": "838605d8-88d5-4e53-b5a7-1dea96c9a0d1", 00:27:30.033 "base_bdev": "nvme0n1", 00:27:30.033 "thin_provision": true, 00:27:30.033 "num_allocated_clusters": 0, 00:27:30.033 "snapshot": false, 00:27:30.033 "clone": false, 00:27:30.033 "esnap_clone": false 00:27:30.033 } 00:27:30.033 } 00:27:30.033 } 00:27:30.033 ]' 00:27:30.033 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:30.033 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:30.033 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:30.033 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:30.033 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:30.033 08:27:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:27:30.033 08:27:34 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:27:30.033 08:27:34 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:30.292 08:27:35 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:27:30.292 08:27:35 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size c668e88e-eeee-4fe7-9e4f-1691ade0ba75 00:27:30.292 08:27:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c668e88e-eeee-4fe7-9e4f-1691ade0ba75 00:27:30.292 08:27:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:30.292 08:27:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:27:30.292 08:27:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:27:30.292 08:27:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c668e88e-eeee-4fe7-9e4f-1691ade0ba75 00:27:30.552 08:27:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:30.552 { 00:27:30.552 "name": "c668e88e-eeee-4fe7-9e4f-1691ade0ba75", 00:27:30.552 "aliases": [ 00:27:30.552 "lvs/nvme0n1p0" 00:27:30.552 ], 00:27:30.552 "product_name": "Logical Volume", 00:27:30.552 "block_size": 4096, 00:27:30.552 "num_blocks": 26476544, 00:27:30.552 "uuid": "c668e88e-eeee-4fe7-9e4f-1691ade0ba75", 00:27:30.552 "assigned_rate_limits": { 00:27:30.552 "rw_ios_per_sec": 0, 00:27:30.552 "rw_mbytes_per_sec": 0, 00:27:30.552 "r_mbytes_per_sec": 0, 00:27:30.552 "w_mbytes_per_sec": 0 00:27:30.552 }, 00:27:30.552 "claimed": false, 00:27:30.552 "zoned": false, 00:27:30.552 "supported_io_types": { 00:27:30.552 "read": true, 00:27:30.552 "write": true, 00:27:30.552 "unmap": true, 00:27:30.552 "flush": false, 00:27:30.552 "reset": true, 00:27:30.552 "nvme_admin": false, 00:27:30.552 "nvme_io": false, 00:27:30.552 "nvme_io_md": false, 00:27:30.552 "write_zeroes": true, 00:27:30.552 "zcopy": false, 00:27:30.552 "get_zone_info": false, 00:27:30.552 "zone_management": false, 00:27:30.552 "zone_append": false, 00:27:30.552 "compare": false, 00:27:30.552 "compare_and_write": false, 00:27:30.552 "abort": false, 
00:27:30.552 "seek_hole": true, 00:27:30.552 "seek_data": true, 00:27:30.552 "copy": false, 00:27:30.552 "nvme_iov_md": false 00:27:30.552 }, 00:27:30.552 "driver_specific": { 00:27:30.552 "lvol": { 00:27:30.552 "lvol_store_uuid": "838605d8-88d5-4e53-b5a7-1dea96c9a0d1", 00:27:30.552 "base_bdev": "nvme0n1", 00:27:30.552 "thin_provision": true, 00:27:30.552 "num_allocated_clusters": 0, 00:27:30.552 "snapshot": false, 00:27:30.552 "clone": false, 00:27:30.552 "esnap_clone": false 00:27:30.552 } 00:27:30.552 } 00:27:30.552 } 00:27:30.552 ]' 00:27:30.552 08:27:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:30.552 08:27:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:27:30.552 08:27:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:30.552 08:27:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:30.552 08:27:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:30.552 08:27:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:27:30.552 08:27:35 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:27:30.553 08:27:35 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c668e88e-eeee-4fe7-9e4f-1691ade0ba75 --l2p_dram_limit 10' 00:27:30.553 08:27:35 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:27:30.553 08:27:35 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:30.553 08:27:35 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:30.553 08:27:35 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:27:30.553 08:27:35 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:27:30.553 08:27:35 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c668e88e-eeee-4fe7-9e4f-1691ade0ba75 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:27:30.813 [2024-11-17 08:27:35.776265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.813 [2024-11-17 08:27:35.776316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:30.813 [2024-11-17 08:27:35.776337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:30.813 [2024-11-17 08:27:35.776349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.813 [2024-11-17 08:27:35.776419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.813 [2024-11-17 08:27:35.776435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:30.813 [2024-11-17 08:27:35.776447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:30.813 [2024-11-17 08:27:35.776456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.813 [2024-11-17 08:27:35.776490] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:30.813 [2024-11-17 08:27:35.777343] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:30.813 [2024-11-17 08:27:35.777391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.813 [2024-11-17 08:27:35.777406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:30.813 [2024-11-17 08:27:35.777419] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:27:30.813 [2024-11-17 08:27:35.777430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.813 [2024-11-17 08:27:35.777588] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ef9185a3-0142-4f78-a994-a1072823e554 00:27:30.813 [2024-11-17 08:27:35.778546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.813 [2024-11-17 08:27:35.778586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:30.813 [2024-11-17 08:27:35.778601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:30.813 [2024-11-17 08:27:35.778612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.813 [2024-11-17 08:27:35.782666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.813 [2024-11-17 08:27:35.782712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:30.813 [2024-11-17 08:27:35.782726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.009 ms 00:27:30.813 [2024-11-17 08:27:35.782737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.813 [2024-11-17 08:27:35.782834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.813 [2024-11-17 08:27:35.782854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:30.813 [2024-11-17 08:27:35.782865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:30.813 [2024-11-17 08:27:35.782880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.813 [2024-11-17 08:27:35.782946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.813 [2024-11-17 08:27:35.782965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:30.813 [2024-11-17 08:27:35.782978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:30.813 [2024-11-17 08:27:35.782989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.813 [2024-11-17 08:27:35.783015] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:30.813 [2024-11-17 08:27:35.787026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.813 [2024-11-17 08:27:35.787061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:30.813 [2024-11-17 08:27:35.787121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.015 ms 00:27:30.813 [2024-11-17 08:27:35.787134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.813 [2024-11-17 08:27:35.787177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.813 [2024-11-17 08:27:35.787190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:30.813 [2024-11-17 08:27:35.787203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:30.813 [2024-11-17 08:27:35.787213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.813 [2024-11-17 08:27:35.787267] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:30.813 [2024-11-17 08:27:35.787436] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:30.813 [2024-11-17 08:27:35.787493] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:30.813 [2024-11-17 08:27:35.787509] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:30.813 [2024-11-17 08:27:35.787526] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:30.813 [2024-11-17 08:27:35.787539] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:30.813 [2024-11-17 08:27:35.787552] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:30.813 [2024-11-17 08:27:35.787565] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:30.813 [2024-11-17 08:27:35.787577] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:30.813 [2024-11-17 08:27:35.787588] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:30.813 [2024-11-17 08:27:35.787601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.813 [2024-11-17 08:27:35.787611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:30.813 [2024-11-17 08:27:35.787624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:27:30.813 [2024-11-17 08:27:35.787647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.813 [2024-11-17 08:27:35.787741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.813 [2024-11-17 08:27:35.787755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:30.814 [2024-11-17 08:27:35.787769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:30.814 [2024-11-17 08:27:35.787780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.814 [2024-11-17 08:27:35.787902] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:30.814 [2024-11-17 08:27:35.787917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:30.814 [2024-11-17 08:27:35.787930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:30.814 [2024-11-17 08:27:35.787941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.814 [2024-11-17 08:27:35.787954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:30.814 [2024-11-17 08:27:35.787963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:30.814 [2024-11-17 08:27:35.787975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:30.814 [2024-11-17 08:27:35.787985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:30.814 [2024-11-17 08:27:35.787997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:30.814 [2024-11-17 08:27:35.788007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:30.814 [2024-11-17 08:27:35.788018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:30.814 [2024-11-17 08:27:35.788027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:30.814 [2024-11-17 08:27:35.788039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:30.814 [2024-11-17 08:27:35.788048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:30.814 [2024-11-17 08:27:35.788060] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:30.814 [2024-11-17 08:27:35.788070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.814 [2024-11-17 08:27:35.788083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:30.814 [2024-11-17 08:27:35.788093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:30.814 [2024-11-17 08:27:35.788106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.814 [2024-11-17 08:27:35.788133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:30.814 [2024-11-17 08:27:35.788146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:30.814 [2024-11-17 08:27:35.788156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.814 [2024-11-17 08:27:35.788168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:30.814 [2024-11-17 08:27:35.788178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:30.814 [2024-11-17 08:27:35.788191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.814 [2024-11-17 08:27:35.788201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:30.814 [2024-11-17 08:27:35.788212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:30.814 [2024-11-17 08:27:35.788221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.814 [2024-11-17 08:27:35.788233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:30.814 [2024-11-17 08:27:35.788243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:30.814 [2024-11-17 08:27:35.788254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:30.814 [2024-11-17 08:27:35.788263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:30.814 [2024-11-17 08:27:35.788277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:30.814 [2024-11-17 08:27:35.788286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:30.814 [2024-11-17 08:27:35.788298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:30.814 [2024-11-17 08:27:35.788307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:30.814 [2024-11-17 08:27:35.788318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:30.814 [2024-11-17 08:27:35.788328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:30.814 [2024-11-17 08:27:35.788339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:30.814 [2024-11-17 08:27:35.788349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.814 [2024-11-17 08:27:35.788360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:30.814 [2024-11-17 08:27:35.788370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:30.814 [2024-11-17 08:27:35.788380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.814 [2024-11-17 08:27:35.788390] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:30.814 [2024-11-17 08:27:35.788403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:30.814 [2024-11-17 08:27:35.788412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:27:30.814 [2024-11-17 08:27:35.788426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:30.814 [2024-11-17 08:27:35.788436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:30.814 [2024-11-17 08:27:35.788450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:30.814 [2024-11-17 08:27:35.788459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:30.814 [2024-11-17 08:27:35.788471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:30.814 [2024-11-17 08:27:35.788480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:30.814 [2024-11-17 08:27:35.788492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:30.814 [2024-11-17 08:27:35.788506] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:30.814 [2024-11-17 08:27:35.788524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.814 [2024-11-17 08:27:35.788536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:30.814 [2024-11-17 08:27:35.788551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:30.814 [2024-11-17 08:27:35.788562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:30.814 [2024-11-17 08:27:35.788574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:30.814 [2024-11-17 08:27:35.788585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:30.814 [2024-11-17 08:27:35.788597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:30.814 [2024-11-17 08:27:35.788608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:30.814 [2024-11-17 08:27:35.788620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:30.814 [2024-11-17 08:27:35.788630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:30.814 [2024-11-17 08:27:35.788645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:30.814 [2024-11-17 08:27:35.788655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:30.814 [2024-11-17 08:27:35.788668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:30.814 [2024-11-17 08:27:35.788679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:30.814 [2024-11-17 08:27:35.788693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:27:30.814 [2024-11-17 08:27:35.788704] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:30.814 [2024-11-17 08:27:35.788718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.814 [2024-11-17 08:27:35.788729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:30.814 [2024-11-17 08:27:35.788741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:30.814 [2024-11-17 08:27:35.788752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:30.814 [2024-11-17 08:27:35.788764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:30.814 [2024-11-17 08:27:35.788776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:30.814 [2024-11-17 08:27:35.788789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:30.814 [2024-11-17 08:27:35.788800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:27:30.814 [2024-11-17 08:27:35.788812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:30.814 [2024-11-17 08:27:35.788861] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:30.814 [2024-11-17 08:27:35.788881] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:33.348 [2024-11-17 08:27:38.106032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.348 [2024-11-17 08:27:38.106139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:33.348 [2024-11-17 08:27:38.106161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2317.185 ms 00:27:33.348 [2024-11-17 08:27:38.106174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.348 [2024-11-17 08:27:38.135481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.348 [2024-11-17 08:27:38.135761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:33.348 [2024-11-17 08:27:38.135806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.088 ms 00:27:33.348 [2024-11-17 08:27:38.135824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.348 [2024-11-17 08:27:38.135987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.348 [2024-11-17 08:27:38.136009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:33.348 [2024-11-17 08:27:38.136022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:33.348 [2024-11-17 08:27:38.136040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.348 [2024-11-17 08:27:38.169150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.348 [2024-11-17 08:27:38.169212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:33.348 [2024-11-17 08:27:38.169229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.030 ms 00:27:33.348 [2024-11-17 08:27:38.169244] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.348 [2024-11-17 08:27:38.169290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.348 [2024-11-17 08:27:38.169306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:33.348 [2024-11-17 08:27:38.169317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:33.348 [2024-11-17 08:27:38.169329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.348 [2024-11-17 08:27:38.169699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.348 [2024-11-17 08:27:38.169722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:33.348 [2024-11-17 08:27:38.169735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:27:33.348 [2024-11-17 08:27:38.169746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.348 [2024-11-17 08:27:38.169861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.348 [2024-11-17 08:27:38.169879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:33.348 [2024-11-17 08:27:38.169891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:27:33.348 [2024-11-17 08:27:38.169904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.348 [2024-11-17 08:27:38.184890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.348 [2024-11-17 08:27:38.185114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:33.348 [2024-11-17 08:27:38.185238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.965 ms 00:27:33.348 [2024-11-17 08:27:38.185292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.348 [2024-11-17 08:27:38.196615] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:33.348 [2024-11-17 08:27:38.199301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.348 [2024-11-17 08:27:38.199522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:33.349 [2024-11-17 08:27:38.199655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.823 ms 00:27:33.349 [2024-11-17 08:27:38.199706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.349 [2024-11-17 08:27:38.267710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.349 [2024-11-17 08:27:38.267814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:33.349 [2024-11-17 08:27:38.267840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.843 ms 00:27:33.349 [2024-11-17 08:27:38.267853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.349 [2024-11-17 08:27:38.268108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.349 [2024-11-17 08:27:38.268164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:33.349 [2024-11-17 08:27:38.268181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:27:33.349 [2024-11-17 08:27:38.268191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.349 [2024-11-17 08:27:38.294007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.349 [2024-11-17 08:27:38.294242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:27:33.349 [2024-11-17 08:27:38.294277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.750 ms 00:27:33.349 [2024-11-17 08:27:38.294306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.349 [2024-11-17 08:27:38.319756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.349 [2024-11-17 08:27:38.319840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:33.349 [2024-11-17 08:27:38.319874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.377 ms 00:27:33.349 [2024-11-17 08:27:38.319885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.349 [2024-11-17 08:27:38.320569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.349 [2024-11-17 08:27:38.320594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:33.349 [2024-11-17 08:27:38.320610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:27:33.349 [2024-11-17 08:27:38.320623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.608 [2024-11-17 08:27:38.393027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.608 [2024-11-17 08:27:38.393103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:33.608 [2024-11-17 08:27:38.393143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.340 ms 00:27:33.608 [2024-11-17 08:27:38.393155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.608 [2024-11-17 08:27:38.419939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.608 [2024-11-17 08:27:38.420158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:33.608 [2024-11-17 08:27:38.420192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.686 ms 00:27:33.608 [2024-11-17 08:27:38.420205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.608 [2024-11-17 08:27:38.445355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.608 [2024-11-17 08:27:38.445392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:33.608 [2024-11-17 08:27:38.445409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.099 ms 00:27:33.608 [2024-11-17 08:27:38.445419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.608 [2024-11-17 08:27:38.470385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.608 [2024-11-17 08:27:38.470423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:33.608 [2024-11-17 08:27:38.470441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.922 ms 00:27:33.608 [2024-11-17 08:27:38.470451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.608 [2024-11-17 08:27:38.470501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.608 [2024-11-17 08:27:38.470518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:33.608 [2024-11-17 08:27:38.470533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:33.608 [2024-11-17 08:27:38.470543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.608 [2024-11-17 08:27:38.470631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.608 [2024-11-17 
08:27:38.470650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:33.608 [2024-11-17 08:27:38.470662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:27:33.608 [2024-11-17 08:27:38.470672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.608 [2024-11-17 08:27:38.471960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2695.068 ms, result 0 00:27:33.608 { 00:27:33.608 "name": "ftl0", 00:27:33.608 "uuid": "ef9185a3-0142-4f78-a994-a1072823e554" 00:27:33.608 } 00:27:33.608 08:27:38 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:27:33.608 08:27:38 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:33.868 08:27:38 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:27:33.868 08:27:38 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:34.127 [2024-11-17 08:27:39.027223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.127 [2024-11-17 08:27:39.027498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:34.127 [2024-11-17 08:27:39.027529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:34.127 [2024-11-17 08:27:39.027556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.127 [2024-11-17 08:27:39.027601] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:34.127 [2024-11-17 08:27:39.030402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.127 [2024-11-17 08:27:39.030432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:34.127 [2024-11-17 08:27:39.030447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.773 ms 00:27:34.127 [2024-11-17 08:27:39.030457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.127 [2024-11-17 08:27:39.030692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.127 [2024-11-17 08:27:39.030711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:34.127 [2024-11-17 08:27:39.030724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:27:34.127 [2024-11-17 08:27:39.030733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.127 [2024-11-17 08:27:39.033551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.127 [2024-11-17 08:27:39.033694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:34.127 [2024-11-17 08:27:39.033831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.793 ms 00:27:34.127 [2024-11-17 08:27:39.033880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.127 [2024-11-17 08:27:39.039535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.127 [2024-11-17 08:27:39.039698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:34.127 [2024-11-17 08:27:39.039838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.527 ms 00:27:34.127 [2024-11-17 08:27:39.039885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.127 [2024-11-17 08:27:39.064628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:34.127 [2024-11-17 08:27:39.064664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:34.127 [2024-11-17 08:27:39.064681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.560 ms 00:27:34.127 [2024-11-17 08:27:39.064692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.127 [2024-11-17 08:27:39.080126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.127 [2024-11-17 08:27:39.080165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:34.127 [2024-11-17 08:27:39.080183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.386 ms 00:27:34.127 [2024-11-17 08:27:39.080193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.127 [2024-11-17 08:27:39.080355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.127 [2024-11-17 08:27:39.080373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:34.127 [2024-11-17 08:27:39.080386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:27:34.127 [2024-11-17 08:27:39.080396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.127 [2024-11-17 08:27:39.105035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.127 [2024-11-17 08:27:39.105073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:34.127 [2024-11-17 08:27:39.105121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.611 ms 00:27:34.127 [2024-11-17 08:27:39.105132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.127 [2024-11-17 08:27:39.129138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.127 [2024-11-17 08:27:39.129176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:34.127 [2024-11-17 08:27:39.129209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.959 ms 00:27:34.128 [2024-11-17 08:27:39.129219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.394 [2024-11-17 08:27:39.154639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.394 [2024-11-17 08:27:39.154675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:34.394 [2024-11-17 08:27:39.154692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.369 ms 00:27:34.394 [2024-11-17 08:27:39.154702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.394 [2024-11-17 08:27:39.178769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.394 [2024-11-17 08:27:39.178815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:34.394 [2024-11-17 08:27:39.178834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.980 ms 00:27:34.394 [2024-11-17 08:27:39.178844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.394 [2024-11-17 08:27:39.178888] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:34.394 [2024-11-17 08:27:39.178909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.178926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.178936] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.178947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.178956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.178967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.178977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.178990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.178999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:34.394 [2024-11-17 08:27:39.179268] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
[... 72 identical ftl_dev_dump_bands records (Bands 29-100, all 0 / 261120 wr_cnt: 0 state: free) condensed ...]
00:27:34.395 [2024-11-17 08:27:39.180261] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:34.395 [2024-11-17 08:27:39.180273] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef9185a3-0142-4f78-a994-a1072823e554
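Each ftl_dev_dump_bands record above reads as "Band N: <valid blocks> / <total blocks> wr_cnt: <writes> state: <state>"; with no user I/O surviving on the device in this pass, every band reports 0 / 261120 valid blocks in the free state. A quick way to sanity-check a capture like this is to tally band states straight from the raw console log; a minimal sketch, assuming the console output has been saved to a hypothetical build.log:

  # Count bands per state from a saved console capture (file name is an assumption).
  grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' build.log |
    awk '{states[$NF]++} END {for (s in states) print s, states[s]}'
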
00:27:34.395 [2024-11-17 08:27:39.180283] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:34.395 [2024-11-17 08:27:39.180296] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:34.395 [2024-11-17 08:27:39.180309] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:34.395 [2024-11-17 08:27:39.180320] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:34.395 [2024-11-17 08:27:39.180330] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:34.395 [2024-11-17 08:27:39.180342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:34.395 [2024-11-17 08:27:39.180351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:34.395 [2024-11-17 08:27:39.180361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:34.395 [2024-11-17 08:27:39.180370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:34.395 [2024-11-17 08:27:39.180382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.395 [2024-11-17 08:27:39.180393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:34.395 [2024-11-17 08:27:39.180425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:27:34.395 [2024-11-17 08:27:39.180437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.193939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.395 [2024-11-17 08:27:39.194116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:34.395 [2024-11-17 08:27:39.194262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.452 ms 00:27:34.395 [2024-11-17 08:27:39.194312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.194755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.395 [2024-11-17 08:27:39.194880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:34.395 [2024-11-17 08:27:39.195006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:27:34.395 [2024-11-17 08:27:39.195052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.237424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.395 [2024-11-17 08:27:39.237582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:34.395 [2024-11-17 08:27:39.237693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.395 [2024-11-17 08:27:39.237802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.237904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.395 [2024-11-17 08:27:39.238008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:34.395 [2024-11-17 08:27:39.238150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.395 [2024-11-17 08:27:39.238270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.238415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.395 [2024-11-17 08:27:39.238488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:34.395 [2024-11-17 08:27:39.238590] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.395 [2024-11-17 08:27:39.238636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.238767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.395 [2024-11-17 08:27:39.238809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:34.395 [2024-11-17 08:27:39.238847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.395 [2024-11-17 08:27:39.238883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.317386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.395 [2024-11-17 08:27:39.317632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:34.395 [2024-11-17 08:27:39.317773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.395 [2024-11-17 08:27:39.317796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.382740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.395 [2024-11-17 08:27:39.382786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:34.395 [2024-11-17 08:27:39.382804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.395 [2024-11-17 08:27:39.382818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.382929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.395 [2024-11-17 08:27:39.382945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:34.395 [2024-11-17 08:27:39.382957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.395 [2024-11-17 08:27:39.382967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.383027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.395 [2024-11-17 08:27:39.383042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:34.395 [2024-11-17 08:27:39.383054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.395 [2024-11-17 08:27:39.383063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.383220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.395 [2024-11-17 08:27:39.383240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:34.395 [2024-11-17 08:27:39.383254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.395 [2024-11-17 08:27:39.383264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.383319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.395 [2024-11-17 08:27:39.383366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:34.395 [2024-11-17 08:27:39.383379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.395 [2024-11-17 08:27:39.383389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.383437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.395 [2024-11-17 08:27:39.383485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:27:34.395 [2024-11-17 08:27:39.383500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.395 [2024-11-17 08:27:39.383511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.383570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.395 [2024-11-17 08:27:39.383586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:34.395 [2024-11-17 08:27:39.383599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.395 [2024-11-17 08:27:39.383610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.395 [2024-11-17 08:27:39.383785] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.513 ms, result 0 00:27:34.395 true 00:27:34.655 08:27:39 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 81252 00:27:34.655 08:27:39 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 81252 ']' 00:27:34.655 08:27:39 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 81252 00:27:34.655 08:27:39 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:27:34.655 08:27:39 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:34.655 08:27:39 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81252 00:27:34.655 killing process with pid 81252 00:27:34.655 08:27:39 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:34.655 08:27:39 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:34.655 08:27:39 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81252' 00:27:34.655 08:27:39 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 81252 00:27:34.655 08:27:39 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 81252 00:27:38.903 08:27:43 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:27:43.093 262144+0 records in 00:27:43.093 262144+0 records out 00:27:43.093 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.7674 s, 285 MB/s 00:27:43.093 08:27:47 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:44.470 08:27:49 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:44.470 [2024-11-17 08:27:49.403276] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
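The xtrace above shows the restore test tearing down the first SPDK app and staging input for the fast-restore pass: killprocess 81252 verifies the pid with kill -0, confirms via ps that the process is reactor_0 (not a sudo wrapper), then kills and reaps it, while dd writes 256K records of 4 KiB random data (262144 x 4096 B = 1073741824 B = 1 GiB, matching the reported totals) that is checksummed and replayed into ftl0 by spdk_dd. A minimal sketch of the helper as reconstructed from the traced commands only; the real function in autotest_common.sh may differ in detail:

  # Reconstruction of killprocess from the xtrace above (illustrative only).
  killprocess() {
      local pid=$1
      [[ -z "$pid" ]] && return 1            # guard: no pid supplied
      kill -0 "$pid" || return 0             # guard: process already gone
      local process_name=unknown
      [[ "$(uname)" == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
      if [[ "$process_name" != sudo ]]; then  # never SIGTERM a sudo wrapper directly
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid"                             # reap the reactor, surface its exit status
  }

The md5sum taken at restore.sh@70 presumably serves as the reference checksum that the test compares against after the FTL device is restored and the data read back.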
00:27:44.470 [2024-11-17 08:27:49.403411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81467 ] 00:27:44.729 [2024-11-17 08:27:49.575232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.729 [2024-11-17 08:27:49.692676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.988 [2024-11-17 08:27:49.946208] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:44.988 [2024-11-17 08:27:49.946295] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:45.249 [2024-11-17 08:27:50.105189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.249 [2024-11-17 08:27:50.105243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:45.249 [2024-11-17 08:27:50.105266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:45.249 [2024-11-17 08:27:50.105275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.249 [2024-11-17 08:27:50.105329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.249 [2024-11-17 08:27:50.105344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:45.249 [2024-11-17 08:27:50.105357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:45.249 [2024-11-17 08:27:50.105366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.249 [2024-11-17 08:27:50.105391] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:45.249 [2024-11-17 08:27:50.106143] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:45.249 [2024-11-17 08:27:50.106168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.249 [2024-11-17 08:27:50.106178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:45.249 [2024-11-17 08:27:50.106189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:27:45.249 [2024-11-17 08:27:50.106198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.249 [2024-11-17 08:27:50.107240] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:45.249 [2024-11-17 08:27:50.119867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.249 [2024-11-17 08:27:50.119904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:45.249 [2024-11-17 08:27:50.119918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.629 ms 00:27:45.249 [2024-11-17 08:27:50.119927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.249 [2024-11-17 08:27:50.119989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.249 [2024-11-17 08:27:50.120005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:45.249 [2024-11-17 08:27:50.120015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:45.249 [2024-11-17 08:27:50.120039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.249 [2024-11-17 08:27:50.124138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:45.249 [2024-11-17 08:27:50.124171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:45.249 [2024-11-17 08:27:50.124183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.980 ms 00:27:45.249 [2024-11-17 08:27:50.124191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.249 [2024-11-17 08:27:50.124266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.249 [2024-11-17 08:27:50.124281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:45.249 [2024-11-17 08:27:50.124291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:45.249 [2024-11-17 08:27:50.124300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.249 [2024-11-17 08:27:50.124351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.249 [2024-11-17 08:27:50.124366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:45.249 [2024-11-17 08:27:50.124376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:45.249 [2024-11-17 08:27:50.124385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.249 [2024-11-17 08:27:50.124412] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:45.249 [2024-11-17 08:27:50.127994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.249 [2024-11-17 08:27:50.128026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:45.249 [2024-11-17 08:27:50.128039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.590 ms 00:27:45.249 [2024-11-17 08:27:50.128052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.249 [2024-11-17 08:27:50.128112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.249 [2024-11-17 08:27:50.128127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:45.249 [2024-11-17 08:27:50.128138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:45.249 [2024-11-17 08:27:50.128146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.249 [2024-11-17 08:27:50.128172] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:45.249 [2024-11-17 08:27:50.128212] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:45.249 [2024-11-17 08:27:50.128248] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:45.249 [2024-11-17 08:27:50.128267] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:45.249 [2024-11-17 08:27:50.128361] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:45.249 [2024-11-17 08:27:50.128374] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:45.249 [2024-11-17 08:27:50.128401] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:45.249 [2024-11-17 08:27:50.128444] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:45.249 [2024-11-17 08:27:50.128456] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:45.249 [2024-11-17 08:27:50.128467] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:45.249 [2024-11-17 08:27:50.128493] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:45.249 [2024-11-17 08:27:50.128503] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:45.249 [2024-11-17 08:27:50.128527] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:45.249 [2024-11-17 08:27:50.128542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.249 [2024-11-17 08:27:50.128552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:45.249 [2024-11-17 08:27:50.128563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:27:45.249 [2024-11-17 08:27:50.128572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.249 [2024-11-17 08:27:50.128656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.249 [2024-11-17 08:27:50.128909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:45.249 [2024-11-17 08:27:50.128932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:45.249 [2024-11-17 08:27:50.128958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.249 [2024-11-17 08:27:50.129147] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:45.249 [2024-11-17 08:27:50.129189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:45.249 [2024-11-17 08:27:50.129201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:45.249 [2024-11-17 08:27:50.129212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.249 [2024-11-17 08:27:50.129222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:45.249 [2024-11-17 08:27:50.129231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:45.249 [2024-11-17 08:27:50.129240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:45.249 [2024-11-17 08:27:50.129249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:45.249 [2024-11-17 08:27:50.129258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:45.249 [2024-11-17 08:27:50.129268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:45.249 [2024-11-17 08:27:50.129277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:45.249 [2024-11-17 08:27:50.129286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:45.249 [2024-11-17 08:27:50.129294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:45.249 [2024-11-17 08:27:50.129303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:45.249 [2024-11-17 08:27:50.129313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:45.250 [2024-11-17 08:27:50.129332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.250 [2024-11-17 08:27:50.129342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:45.250 [2024-11-17 08:27:50.129351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:45.250 [2024-11-17 08:27:50.129361] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.250 [2024-11-17 08:27:50.129371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:45.250 [2024-11-17 08:27:50.129380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:45.250 [2024-11-17 08:27:50.129402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.250 [2024-11-17 08:27:50.129411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:45.250 [2024-11-17 08:27:50.129419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:45.250 [2024-11-17 08:27:50.129444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.250 [2024-11-17 08:27:50.129453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:45.250 [2024-11-17 08:27:50.129476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:45.250 [2024-11-17 08:27:50.129484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.250 [2024-11-17 08:27:50.129492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:45.250 [2024-11-17 08:27:50.129501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:45.250 [2024-11-17 08:27:50.129524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:45.250 [2024-11-17 08:27:50.129532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:45.250 [2024-11-17 08:27:50.129541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:45.250 [2024-11-17 08:27:50.129549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:45.250 [2024-11-17 08:27:50.129558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:45.250 [2024-11-17 08:27:50.129567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:45.250 [2024-11-17 08:27:50.129575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:45.250 [2024-11-17 08:27:50.129583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:45.250 [2024-11-17 08:27:50.129591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:45.250 [2024-11-17 08:27:50.129599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.250 [2024-11-17 08:27:50.129607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:45.250 [2024-11-17 08:27:50.129615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:45.250 [2024-11-17 08:27:50.129624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.250 [2024-11-17 08:27:50.129632] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:45.250 [2024-11-17 08:27:50.129641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:45.250 [2024-11-17 08:27:50.129649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:45.250 [2024-11-17 08:27:50.129658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:45.250 [2024-11-17 08:27:50.129667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:45.250 [2024-11-17 08:27:50.129675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:45.250 [2024-11-17 08:27:50.129684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:45.250 
[2024-11-17 08:27:50.129693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:45.250 [2024-11-17 08:27:50.129701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:45.250 [2024-11-17 08:27:50.129709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:45.250 [2024-11-17 08:27:50.129719] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:45.250 [2024-11-17 08:27:50.129731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.250 [2024-11-17 08:27:50.129741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:45.250 [2024-11-17 08:27:50.129751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:45.250 [2024-11-17 08:27:50.129760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:45.250 [2024-11-17 08:27:50.129784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:45.250 [2024-11-17 08:27:50.129793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:45.250 [2024-11-17 08:27:50.129801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:45.250 [2024-11-17 08:27:50.129810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:45.250 [2024-11-17 08:27:50.129819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:45.250 [2024-11-17 08:27:50.129828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:45.250 [2024-11-17 08:27:50.129838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:45.250 [2024-11-17 08:27:50.129847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:45.250 [2024-11-17 08:27:50.129856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:45.250 [2024-11-17 08:27:50.129865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:45.250 [2024-11-17 08:27:50.129874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:45.250 [2024-11-17 08:27:50.129883] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:45.250 [2024-11-17 08:27:50.129897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:45.250 [2024-11-17 08:27:50.129907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:45.250 [2024-11-17 08:27:50.129916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:45.250 [2024-11-17 08:27:50.129925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:45.250 [2024-11-17 08:27:50.129934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:45.250 [2024-11-17 08:27:50.129944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.250 [2024-11-17 08:27:50.129953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:45.250 [2024-11-17 08:27:50.129962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:27:45.250 [2024-11-17 08:27:50.129970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.250 [2024-11-17 08:27:50.156193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.250 [2024-11-17 08:27:50.156243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:45.250 [2024-11-17 08:27:50.156258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.160 ms 00:27:45.250 [2024-11-17 08:27:50.156268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.250 [2024-11-17 08:27:50.156362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.250 [2024-11-17 08:27:50.156375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:45.250 [2024-11-17 08:27:50.156385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:45.250 [2024-11-17 08:27:50.156393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.250 [2024-11-17 08:27:50.199548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.250 [2024-11-17 08:27:50.199599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:45.250 [2024-11-17 08:27:50.199616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.081 ms 00:27:45.250 [2024-11-17 08:27:50.199642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.250 [2024-11-17 08:27:50.199696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.250 [2024-11-17 08:27:50.199711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:45.250 [2024-11-17 08:27:50.199723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:45.250 [2024-11-17 08:27:50.199753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.250 [2024-11-17 08:27:50.200176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.250 [2024-11-17 08:27:50.200195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:45.250 [2024-11-17 08:27:50.200207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:27:45.250 [2024-11-17 08:27:50.200216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.250 [2024-11-17 08:27:50.200370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.250 [2024-11-17 08:27:50.200387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:45.250 [2024-11-17 08:27:50.200398] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:27:45.250 [2024-11-17 08:27:50.200413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.250 [2024-11-17 08:27:50.214214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.250 [2024-11-17 08:27:50.214250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:45.250 [2024-11-17 08:27:50.214285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.777 ms 00:27:45.250 [2024-11-17 08:27:50.214294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.250 [2024-11-17 08:27:50.227617] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:45.250 [2024-11-17 08:27:50.227849] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:45.250 [2024-11-17 08:27:50.227871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.250 [2024-11-17 08:27:50.227882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:45.250 [2024-11-17 08:27:50.227894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.472 ms 00:27:45.250 [2024-11-17 08:27:50.227904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.250 [2024-11-17 08:27:50.251400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.251 [2024-11-17 08:27:50.251436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:45.251 [2024-11-17 08:27:50.251479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.447 ms 00:27:45.251 [2024-11-17 08:27:50.251490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.510 [2024-11-17 08:27:50.265383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.510 [2024-11-17 08:27:50.265426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:45.510 [2024-11-17 08:27:50.265456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.850 ms 00:27:45.510 [2024-11-17 08:27:50.265464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.510 [2024-11-17 08:27:50.278094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.510 [2024-11-17 08:27:50.278128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:45.510 [2024-11-17 08:27:50.278141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.576 ms 00:27:45.510 [2024-11-17 08:27:50.278149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.510 [2024-11-17 08:27:50.278759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.510 [2024-11-17 08:27:50.278798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:45.510 [2024-11-17 08:27:50.278825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:27:45.510 [2024-11-17 08:27:50.278849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.510 [2024-11-17 08:27:50.336102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.510 [2024-11-17 08:27:50.336426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:45.510 [2024-11-17 08:27:50.336455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.228 ms 00:27:45.511 [2024-11-17 08:27:50.336475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.511 [2024-11-17 08:27:50.346445] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:45.511 [2024-11-17 08:27:50.348479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.511 [2024-11-17 08:27:50.348509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:45.511 [2024-11-17 08:27:50.348522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.947 ms 00:27:45.511 [2024-11-17 08:27:50.348532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.511 [2024-11-17 08:27:50.348640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.511 [2024-11-17 08:27:50.348657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:45.511 [2024-11-17 08:27:50.348668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:45.511 [2024-11-17 08:27:50.348677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.511 [2024-11-17 08:27:50.348761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.511 [2024-11-17 08:27:50.348777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:45.511 [2024-11-17 08:27:50.348787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:45.511 [2024-11-17 08:27:50.348796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.511 [2024-11-17 08:27:50.348818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.511 [2024-11-17 08:27:50.348828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:45.511 [2024-11-17 08:27:50.348838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:45.511 [2024-11-17 08:27:50.348846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.511 [2024-11-17 08:27:50.348882] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:45.511 [2024-11-17 08:27:50.348896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.511 [2024-11-17 08:27:50.348908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:45.511 [2024-11-17 08:27:50.348917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:45.511 [2024-11-17 08:27:50.348926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.511 [2024-11-17 08:27:50.373666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.511 [2024-11-17 08:27:50.373703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:45.511 [2024-11-17 08:27:50.373717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.721 ms 00:27:45.511 [2024-11-17 08:27:50.373726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:45.511 [2024-11-17 08:27:50.373802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:45.511 [2024-11-17 08:27:50.373817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:45.511 [2024-11-17 08:27:50.373827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:45.511 [2024-11-17 08:27:50.373835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
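The startup sequence that just completed dumped the full on-disk layout above. The SB metadata records give region offsets and sizes in FTL blocks (4 KiB on this device), while the dump_region lines report MiB, and the two agree: region type 0x2 with blk_sz 0x5000 is the 80.00 MiB l2p region, which also matches 20971520 L2P entries at 4 bytes each, and type 0x9 with blk_sz 0x1900000 is the 102400.00 MiB data_btm region. A minimal sketch for converting between the two, assuming the 4 KiB block size:

  # Convert a blk_sz/blk_offs value from the SB metadata dump to MiB
  # (integer MiB; fractional region sizes truncate).
  blk_to_mib() {
      echo "$(( $1 * 4096 / 1048576 )) MiB"
  }
  blk_to_mib 0x5000      # region type 0x2 (l2p)      -> 80 MiB
  blk_to_mib 0x1900000   # region type 0x9 (data_btm) -> 102400 MiB
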
00:27:45.511 [2024-11-17 08:27:50.375189] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 269.443 ms, result 0
00:27:46.448  [2024-11-17T08:27:52.398Z] Copying: 23/1024 [MB] (23 MBps)
[... 41 further incremental spdk_dd progress records (47/1024 through 1004/1024 MB, steady 22-24 MBps) condensed ...]
[2024-11-17T08:28:33.568Z] Copying: 1024/1024 [MB] (average 23 MBps)
[2024-11-17 08:28:33.213884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.556 [2024-11-17 08:28:33.213939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:28.556 [2024-11-17 08:28:33.213958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:28:28.556 [2024-11-17 08:28:33.213969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.556 [2024-11-17 08:28:33.214011] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:28.556 [2024-11-17 08:28:33.217116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.556 [2024-11-17 08:28:33.217149] mngt/ftl_mngt.c: 428:trace_step:
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:28.556 [2024-11-17 08:28:33.217162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms 00:28:28.556 [2024-11-17 08:28:33.217171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.556 [2024-11-17 08:28:33.218957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.556 [2024-11-17 08:28:33.218995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:28.556 [2024-11-17 08:28:33.219023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:28:28.556 [2024-11-17 08:28:33.219032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.556 [2024-11-17 08:28:33.219062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.556 [2024-11-17 08:28:33.219076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:28.556 [2024-11-17 08:28:33.219104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:28.556 [2024-11-17 08:28:33.219124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.556 [2024-11-17 08:28:33.219198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.556 [2024-11-17 08:28:33.219244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:28.556 [2024-11-17 08:28:33.219282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:28.556 [2024-11-17 08:28:33.219300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.556 [2024-11-17 08:28:33.219334] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:28.556 [2024-11-17 08:28:33.219371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:28.556 [2024-11-17 08:28:33.219388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:28.556 [2024-11-17 08:28:33.219398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:28.556 [2024-11-17 08:28:33.219408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:28.556 [2024-11-17 08:28:33.219418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:28.556 [2024-11-17 08:28:33.219427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:28.556 [2024-11-17 08:28:33.219437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:28.556 [2024-11-17 08:28:33.219474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:28.556 [2024-11-17 08:28:33.219493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:28.556 [2024-11-17 08:28:33.219511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:28.556 [2024-11-17 08:28:33.219528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:28.556 [2024-11-17 08:28:33.219542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:28.556 [2024-11-17 08:28:33.219552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
[... 87 identical ftl_dev_dump_bands records (Bands 14-100, all 0 / 261120 wr_cnt: 0 state: free) condensed ...]
00:28:28.557 [2024-11-17 08:28:33.221520] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:28.557 [2024-11-17 08:28:33.221530] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef9185a3-0142-4f78-a994-a1072823e554
00:28:28.557 [2024-11-17 08:28:33.221539] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:28:28.557 [2024-11-17 08:28:33.221549] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:28:28.557 [2024-11-17 08:28:33.221559] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:28:28.557 [2024-11-17 08:28:33.221569] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:28:28.557 [2024-11-17 08:28:33.221584] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:28.557 [2024-11-17 08:28:33.221594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:28:28.557 [2024-11-17 08:28:33.221603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:28:28.557 [2024-11-17 08:28:33.221612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:28:28.557 [2024-11-17 08:28:33.221620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:28:28.557 [2024-11-17 08:28:33.221630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.557 [2024-11-17 08:28:33.221640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:28.557 [2024-11-17 08:28:33.221651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.298 ms
00:28:28.557 [2024-11-17 08:28:33.221660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:28:28.557 [2024-11-17 08:28:33.234810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.557 [2024-11-17 08:28:33.234847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:28.557 [2024-11-17 08:28:33.234868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.124 ms 00:28:28.557 [2024-11-17 08:28:33.234877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.557 [2024-11-17 08:28:33.235294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.557 [2024-11-17 08:28:33.235363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:28.557 [2024-11-17 08:28:33.235382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:28:28.557 [2024-11-17 08:28:33.235392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.557 [2024-11-17 08:28:33.269475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.557 [2024-11-17 08:28:33.269522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:28.557 [2024-11-17 08:28:33.269553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.557 [2024-11-17 08:28:33.269562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.557 [2024-11-17 08:28:33.269614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.557 [2024-11-17 08:28:33.269628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:28.557 [2024-11-17 08:28:33.269638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.557 [2024-11-17 08:28:33.269646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.557 [2024-11-17 08:28:33.269729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.557 [2024-11-17 08:28:33.269749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:28.557 [2024-11-17 08:28:33.269766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.557 [2024-11-17 08:28:33.269790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.557 [2024-11-17 08:28:33.269810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.557 [2024-11-17 08:28:33.269821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:28.557 [2024-11-17 08:28:33.269831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.557 [2024-11-17 08:28:33.269844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.557 [2024-11-17 08:28:33.348052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.557 [2024-11-17 08:28:33.348339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:28.557 [2024-11-17 08:28:33.348375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.557 [2024-11-17 08:28:33.348386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.557 [2024-11-17 08:28:33.413472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.557 [2024-11-17 08:28:33.413521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:28.557 [2024-11-17 08:28:33.413537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.557 
[2024-11-17 08:28:33.413546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.557 [2024-11-17 08:28:33.413613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.557 [2024-11-17 08:28:33.413627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:28.557 [2024-11-17 08:28:33.413637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.557 [2024-11-17 08:28:33.413653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.557 [2024-11-17 08:28:33.413713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.557 [2024-11-17 08:28:33.413729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:28.557 [2024-11-17 08:28:33.413738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.557 [2024-11-17 08:28:33.413747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.558 [2024-11-17 08:28:33.413824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.558 [2024-11-17 08:28:33.413840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:28.558 [2024-11-17 08:28:33.413850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.558 [2024-11-17 08:28:33.413859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.558 [2024-11-17 08:28:33.413905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.558 [2024-11-17 08:28:33.413920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:28.558 [2024-11-17 08:28:33.413929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.558 [2024-11-17 08:28:33.413938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.558 [2024-11-17 08:28:33.413975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.558 [2024-11-17 08:28:33.413987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:28.558 [2024-11-17 08:28:33.413996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.558 [2024-11-17 08:28:33.414004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.558 [2024-11-17 08:28:33.414052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.558 [2024-11-17 08:28:33.414066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:28.558 [2024-11-17 08:28:33.414075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.558 [2024-11-17 08:28:33.414129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.558 [2024-11-17 08:28:33.414364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 200.374 ms, result 0 00:28:29.494 00:28:29.494 00:28:29.494 08:28:34 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:28:29.494 [2024-11-17 08:28:34.408918] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:28:29.494 [2024-11-17 08:28:34.409418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81903 ] 00:28:29.753 [2024-11-17 08:28:34.589105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.753 [2024-11-17 08:28:34.666713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.012 [2024-11-17 08:28:34.922926] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:30.012 [2024-11-17 08:28:34.923000] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:30.274 [2024-11-17 08:28:35.077955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.274 [2024-11-17 08:28:35.077999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:30.274 [2024-11-17 08:28:35.078024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:30.274 [2024-11-17 08:28:35.078034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.274 [2024-11-17 08:28:35.078137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.274 [2024-11-17 08:28:35.078155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:30.274 [2024-11-17 08:28:35.078171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:28:30.274 [2024-11-17 08:28:35.078181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.274 [2024-11-17 08:28:35.078225] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:30.274 [2024-11-17 08:28:35.079154] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:30.274 [2024-11-17 08:28:35.079373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.274 [2024-11-17 08:28:35.079395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:30.274 [2024-11-17 08:28:35.079408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:28:30.274 [2024-11-17 08:28:35.079418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.274 [2024-11-17 08:28:35.079926] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:30.274 [2024-11-17 08:28:35.079957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.274 [2024-11-17 08:28:35.079968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:30.274 [2024-11-17 08:28:35.079986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:30.274 [2024-11-17 08:28:35.079995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.274 [2024-11-17 08:28:35.080073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.274 [2024-11-17 08:28:35.080088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:30.274 [2024-11-17 08:28:35.080099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:30.274 [2024-11-17 08:28:35.080136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.274 [2024-11-17 08:28:35.080535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:30.274 [2024-11-17 08:28:35.080576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:30.274 [2024-11-17 08:28:35.080589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:28:30.274 [2024-11-17 08:28:35.080599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.274 [2024-11-17 08:28:35.080672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.274 [2024-11-17 08:28:35.080690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:30.274 [2024-11-17 08:28:35.080701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:30.274 [2024-11-17 08:28:35.080711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.274 [2024-11-17 08:28:35.080740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.274 [2024-11-17 08:28:35.080753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:30.274 [2024-11-17 08:28:35.080764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:30.274 [2024-11-17 08:28:35.080778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.274 [2024-11-17 08:28:35.080805] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:30.274 [2024-11-17 08:28:35.084572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.274 [2024-11-17 08:28:35.084608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:30.274 [2024-11-17 08:28:35.084621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.773 ms 00:28:30.274 [2024-11-17 08:28:35.084631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.274 [2024-11-17 08:28:35.084662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.274 [2024-11-17 08:28:35.084675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:30.274 [2024-11-17 08:28:35.084684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:30.274 [2024-11-17 08:28:35.084693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.274 [2024-11-17 08:28:35.084743] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:30.274 [2024-11-17 08:28:35.084771] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:30.274 [2024-11-17 08:28:35.084808] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:30.274 [2024-11-17 08:28:35.084825] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:30.274 [2024-11-17 08:28:35.084910] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:30.274 [2024-11-17 08:28:35.084923] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:30.274 [2024-11-17 08:28:35.084935] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:30.274 [2024-11-17 08:28:35.084947] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:30.274 [2024-11-17 08:28:35.084957] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:30.274 [2024-11-17 08:28:35.084967] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:30.274 [2024-11-17 08:28:35.084980] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:30.274 [2024-11-17 08:28:35.084989] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:30.274 [2024-11-17 08:28:35.084997] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:30.274 [2024-11-17 08:28:35.085007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.274 [2024-11-17 08:28:35.085016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:30.274 [2024-11-17 08:28:35.085026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:28:30.274 [2024-11-17 08:28:35.085036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.274 [2024-11-17 08:28:35.085141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.274 [2024-11-17 08:28:35.085164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:30.274 [2024-11-17 08:28:35.085174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:30.274 [2024-11-17 08:28:35.085188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.274 [2024-11-17 08:28:35.085286] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:30.274 [2024-11-17 08:28:35.085303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:30.274 [2024-11-17 08:28:35.085314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:30.274 [2024-11-17 08:28:35.085324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.274 [2024-11-17 08:28:35.085334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:30.274 [2024-11-17 08:28:35.085343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:30.274 [2024-11-17 08:28:35.085353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:30.274 [2024-11-17 08:28:35.085363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:30.274 [2024-11-17 08:28:35.085372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:30.274 [2024-11-17 08:28:35.085381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:30.274 [2024-11-17 08:28:35.085391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:30.274 [2024-11-17 08:28:35.085400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:30.274 [2024-11-17 08:28:35.085409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:30.274 [2024-11-17 08:28:35.085418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:30.274 [2024-11-17 08:28:35.085427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:30.274 [2024-11-17 08:28:35.085436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.275 [2024-11-17 08:28:35.085446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:30.275 [2024-11-17 08:28:35.085465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:30.275 [2024-11-17 08:28:35.085490] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.275 [2024-11-17 08:28:35.085498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:30.275 [2024-11-17 08:28:35.085507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:30.275 [2024-11-17 08:28:35.085516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.275 [2024-11-17 08:28:35.085525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:30.275 [2024-11-17 08:28:35.085533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:30.275 [2024-11-17 08:28:35.085541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.275 [2024-11-17 08:28:35.085550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:30.275 [2024-11-17 08:28:35.085558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:30.275 [2024-11-17 08:28:35.085567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.275 [2024-11-17 08:28:35.085575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:30.275 [2024-11-17 08:28:35.085584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:30.275 [2024-11-17 08:28:35.085592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:30.275 [2024-11-17 08:28:35.085600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:30.275 [2024-11-17 08:28:35.085609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:30.275 [2024-11-17 08:28:35.085617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:30.275 [2024-11-17 08:28:35.085626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:30.275 [2024-11-17 08:28:35.085634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:30.275 [2024-11-17 08:28:35.085643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:30.275 [2024-11-17 08:28:35.085651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:30.275 [2024-11-17 08:28:35.085659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:30.275 [2024-11-17 08:28:35.085668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.275 [2024-11-17 08:28:35.085676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:30.275 [2024-11-17 08:28:35.085685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:30.275 [2024-11-17 08:28:35.085694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.275 [2024-11-17 08:28:35.085703] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:30.275 [2024-11-17 08:28:35.085712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:30.275 [2024-11-17 08:28:35.085721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:30.275 [2024-11-17 08:28:35.085730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:30.275 [2024-11-17 08:28:35.085739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:30.275 [2024-11-17 08:28:35.085749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:30.275 [2024-11-17 08:28:35.085758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:30.275 
[2024-11-17 08:28:35.085766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:30.275 [2024-11-17 08:28:35.085774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:30.275 [2024-11-17 08:28:35.085783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:30.275 [2024-11-17 08:28:35.085793] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:30.275 [2024-11-17 08:28:35.085809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:30.275 [2024-11-17 08:28:35.085819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:30.275 [2024-11-17 08:28:35.085828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:30.275 [2024-11-17 08:28:35.085837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:30.275 [2024-11-17 08:28:35.085847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:30.275 [2024-11-17 08:28:35.085855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:30.275 [2024-11-17 08:28:35.085865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:30.275 [2024-11-17 08:28:35.085874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:30.275 [2024-11-17 08:28:35.085883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:30.275 [2024-11-17 08:28:35.085892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:30.275 [2024-11-17 08:28:35.085901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:30.275 [2024-11-17 08:28:35.085911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:30.275 [2024-11-17 08:28:35.085920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:30.275 [2024-11-17 08:28:35.085929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:30.275 [2024-11-17 08:28:35.085938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:30.275 [2024-11-17 08:28:35.085947] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:30.275 [2024-11-17 08:28:35.085958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:30.275 [2024-11-17 08:28:35.085968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:30.275 [2024-11-17 08:28:35.085978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:30.275 [2024-11-17 08:28:35.085987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:30.275 [2024-11-17 08:28:35.085997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:30.275 [2024-11-17 08:28:35.086006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.275 [2024-11-17 08:28:35.086016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:30.275 [2024-11-17 08:28:35.086025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:28:30.275 [2024-11-17 08:28:35.086034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.275 [2024-11-17 08:28:35.110466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.275 [2024-11-17 08:28:35.110512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:30.275 [2024-11-17 08:28:35.110528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.385 ms 00:28:30.275 [2024-11-17 08:28:35.110538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.275 [2024-11-17 08:28:35.110621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.275 [2024-11-17 08:28:35.110635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:30.275 [2024-11-17 08:28:35.110645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:30.275 [2024-11-17 08:28:35.110659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.275 [2024-11-17 08:28:35.157816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.275 [2024-11-17 08:28:35.158014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:30.275 [2024-11-17 08:28:35.158042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.081 ms 00:28:30.275 [2024-11-17 08:28:35.158055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.275 [2024-11-17 08:28:35.158152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.275 [2024-11-17 08:28:35.158171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:30.275 [2024-11-17 08:28:35.158183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:30.275 [2024-11-17 08:28:35.158194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.275 [2024-11-17 08:28:35.158338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.275 [2024-11-17 08:28:35.158381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:30.275 [2024-11-17 08:28:35.158393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:30.275 [2024-11-17 08:28:35.158403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.275 [2024-11-17 08:28:35.158554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.275 [2024-11-17 08:28:35.158574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:30.275 [2024-11-17 08:28:35.158585] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:28:30.275 [2024-11-17 08:28:35.158595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.275 [2024-11-17 08:28:35.172135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.275 [2024-11-17 08:28:35.172174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:30.275 [2024-11-17 08:28:35.172190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.516 ms 00:28:30.275 [2024-11-17 08:28:35.172200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.275 [2024-11-17 08:28:35.172348] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:30.275 [2024-11-17 08:28:35.172367] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:30.275 [2024-11-17 08:28:35.172379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.275 [2024-11-17 08:28:35.172392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:30.275 [2024-11-17 08:28:35.172402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:30.275 [2024-11-17 08:28:35.172412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.275 [2024-11-17 08:28:35.182947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.275 [2024-11-17 08:28:35.182976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:30.276 [2024-11-17 08:28:35.182988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.517 ms 00:28:30.276 [2024-11-17 08:28:35.182998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.276 [2024-11-17 08:28:35.183124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.276 [2024-11-17 08:28:35.183140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:30.276 [2024-11-17 08:28:35.183151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:28:30.276 [2024-11-17 08:28:35.183181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.276 [2024-11-17 08:28:35.183277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.276 [2024-11-17 08:28:35.183294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:30.276 [2024-11-17 08:28:35.183307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:30.276 [2024-11-17 08:28:35.183317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.276 [2024-11-17 08:28:35.184116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.276 [2024-11-17 08:28:35.184182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:30.276 [2024-11-17 08:28:35.184199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:28:30.276 [2024-11-17 08:28:35.184210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.276 [2024-11-17 08:28:35.184237] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:30.276 [2024-11-17 08:28:35.184260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.276 [2024-11-17 08:28:35.184271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:28:30.276 [2024-11-17 08:28:35.184282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:30.276 [2024-11-17 08:28:35.184292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.276 [2024-11-17 08:28:35.194626] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:30.276 [2024-11-17 08:28:35.194956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.276 [2024-11-17 08:28:35.194987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:30.276 [2024-11-17 08:28:35.195002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.627 ms 00:28:30.276 [2024-11-17 08:28:35.195013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.276 [2024-11-17 08:28:35.196965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.276 [2024-11-17 08:28:35.196997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:30.276 [2024-11-17 08:28:35.197010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.925 ms 00:28:30.276 [2024-11-17 08:28:35.197020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.276 [2024-11-17 08:28:35.197139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.276 [2024-11-17 08:28:35.197158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:30.276 [2024-11-17 08:28:35.197169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:30.276 [2024-11-17 08:28:35.197179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.276 [2024-11-17 08:28:35.197210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.276 [2024-11-17 08:28:35.197223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:30.276 [2024-11-17 08:28:35.197240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:30.276 [2024-11-17 08:28:35.197249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.276 [2024-11-17 08:28:35.197284] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:30.276 [2024-11-17 08:28:35.197300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.276 [2024-11-17 08:28:35.197309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:30.276 [2024-11-17 08:28:35.197319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:30.276 [2024-11-17 08:28:35.197328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.276 [2024-11-17 08:28:35.222096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.276 [2024-11-17 08:28:35.222266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:30.276 [2024-11-17 08:28:35.222419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.747 ms 00:28:30.276 [2024-11-17 08:28:35.222480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.276 [2024-11-17 08:28:35.222589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:30.276 [2024-11-17 08:28:35.222670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:30.276 [2024-11-17 08:28:35.222716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.037 ms 00:28:30.276 [2024-11-17 08:28:35.222752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:30.276 [2024-11-17 08:28:35.224000] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 145.497 ms, result 0 00:28:31.654  [2024-11-17T08:28:37.603Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-17T08:28:38.540Z] Copying: 45/1024 [MB] (22 MBps) [2024-11-17T08:28:39.478Z] Copying: 67/1024 [MB] (22 MBps) [2024-11-17T08:28:40.414Z] Copying: 89/1024 [MB] (22 MBps) [2024-11-17T08:28:41.792Z] Copying: 110/1024 [MB] (21 MBps) [2024-11-17T08:28:42.728Z] Copying: 132/1024 [MB] (21 MBps) [2024-11-17T08:28:43.675Z] Copying: 154/1024 [MB] (21 MBps) [2024-11-17T08:28:44.653Z] Copying: 175/1024 [MB] (21 MBps) [2024-11-17T08:28:45.589Z] Copying: 198/1024 [MB] (22 MBps) [2024-11-17T08:28:46.526Z] Copying: 220/1024 [MB] (22 MBps) [2024-11-17T08:28:47.462Z] Copying: 242/1024 [MB] (22 MBps) [2024-11-17T08:28:48.398Z] Copying: 264/1024 [MB] (22 MBps) [2024-11-17T08:28:49.775Z] Copying: 287/1024 [MB] (22 MBps) [2024-11-17T08:28:50.711Z] Copying: 310/1024 [MB] (22 MBps) [2024-11-17T08:28:51.646Z] Copying: 332/1024 [MB] (22 MBps) [2024-11-17T08:28:52.581Z] Copying: 354/1024 [MB] (22 MBps) [2024-11-17T08:28:53.518Z] Copying: 376/1024 [MB] (22 MBps) [2024-11-17T08:28:54.453Z] Copying: 399/1024 [MB] (22 MBps) [2024-11-17T08:28:55.837Z] Copying: 421/1024 [MB] (22 MBps) [2024-11-17T08:28:56.407Z] Copying: 444/1024 [MB] (22 MBps) [2024-11-17T08:28:57.785Z] Copying: 466/1024 [MB] (22 MBps) [2024-11-17T08:28:58.722Z] Copying: 489/1024 [MB] (22 MBps) [2024-11-17T08:28:59.659Z] Copying: 511/1024 [MB] (22 MBps) [2024-11-17T08:29:00.596Z] Copying: 533/1024 [MB] (21 MBps) [2024-11-17T08:29:01.534Z] Copying: 555/1024 [MB] (22 MBps) [2024-11-17T08:29:02.470Z] Copying: 577/1024 [MB] (22 MBps) [2024-11-17T08:29:03.407Z] Copying: 599/1024 [MB] (22 MBps) [2024-11-17T08:29:04.784Z] Copying: 621/1024 [MB] (22 MBps) [2024-11-17T08:29:05.720Z] Copying: 644/1024 [MB] (22 MBps) [2024-11-17T08:29:06.657Z] Copying: 666/1024 [MB] (22 MBps) [2024-11-17T08:29:07.594Z] Copying: 689/1024 [MB] (22 MBps) [2024-11-17T08:29:08.529Z] Copying: 711/1024 [MB] (22 MBps) [2024-11-17T08:29:09.463Z] Copying: 734/1024 [MB] (22 MBps) [2024-11-17T08:29:10.399Z] Copying: 756/1024 [MB] (22 MBps) [2024-11-17T08:29:11.776Z] Copying: 779/1024 [MB] (22 MBps) [2024-11-17T08:29:12.713Z] Copying: 801/1024 [MB] (21 MBps) [2024-11-17T08:29:13.682Z] Copying: 823/1024 [MB] (21 MBps) [2024-11-17T08:29:14.619Z] Copying: 845/1024 [MB] (22 MBps) [2024-11-17T08:29:15.555Z] Copying: 868/1024 [MB] (22 MBps) [2024-11-17T08:29:16.492Z] Copying: 891/1024 [MB] (22 MBps) [2024-11-17T08:29:17.429Z] Copying: 913/1024 [MB] (22 MBps) [2024-11-17T08:29:18.806Z] Copying: 936/1024 [MB] (22 MBps) [2024-11-17T08:29:19.745Z] Copying: 959/1024 [MB] (22 MBps) [2024-11-17T08:29:20.683Z] Copying: 981/1024 [MB] (22 MBps) [2024-11-17T08:29:21.622Z] Copying: 1003/1024 [MB] (22 MBps) [2024-11-17T08:29:21.622Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-17 08:29:21.498486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.610 [2024-11-17 08:29:21.498781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:16.610 [2024-11-17 08:29:21.498847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:16.610 [2024-11-17 08:29:21.498883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.610 
[2024-11-17 08:29:21.498960] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:16.610 [2024-11-17 08:29:21.504008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.610 [2024-11-17 08:29:21.504241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:16.610 [2024-11-17 08:29:21.504462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.011 ms 00:29:16.610 [2024-11-17 08:29:21.504685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.610 [2024-11-17 08:29:21.505342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.610 [2024-11-17 08:29:21.505584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:16.610 [2024-11-17 08:29:21.505808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:29:16.610 [2024-11-17 08:29:21.506020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.610 [2024-11-17 08:29:21.506301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.610 [2024-11-17 08:29:21.506362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:16.610 [2024-11-17 08:29:21.506395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:16.610 [2024-11-17 08:29:21.506418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.610 [2024-11-17 08:29:21.506499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.610 [2024-11-17 08:29:21.506519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:16.610 [2024-11-17 08:29:21.506536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:16.610 [2024-11-17 08:29:21.506551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.610 [2024-11-17 08:29:21.506576] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:16.610 [2024-11-17 08:29:21.506597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:16.610 [2024-11-17 08:29:21.506615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:16.610 [2024-11-17 08:29:21.506632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:16.610 [2024-11-17 08:29:21.506647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:16.610 [2024-11-17 08:29:21.506663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:16.610 [2024-11-17 08:29:21.506678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:16.610 [2024-11-17 08:29:21.506693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:16.610 [2024-11-17 08:29:21.506709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:16.610 [2024-11-17 08:29:21.506725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:16.610 [2024-11-17 08:29:21.506740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506756] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.506987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 
08:29:21.507178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:29:16.611 [2024-11-17 08:29:21.507640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.507999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.508009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.508019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.508029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.508039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.508049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.508059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.508069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.508079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.508099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.508112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:16.611 [2024-11-17 08:29:21.508122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:16.612 [2024-11-17 08:29:21.508132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:16.612 [2024-11-17 08:29:21.508142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:16.612 [2024-11-17 08:29:21.508152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:16.612 [2024-11-17 08:29:21.508162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:16.612 [2024-11-17 08:29:21.508173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:16.612 [2024-11-17 08:29:21.508190] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:16.612 [2024-11-17 08:29:21.508201] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef9185a3-0142-4f78-a994-a1072823e554 00:29:16.612 [2024-11-17 08:29:21.508216] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:16.612 [2024-11-17 08:29:21.508225] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:16.612 [2024-11-17 08:29:21.508234] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:16.612 [2024-11-17 08:29:21.508244] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:16.612 [2024-11-17 08:29:21.508254] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:16.612 [2024-11-17 08:29:21.508264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:16.612 [2024-11-17 08:29:21.508273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:16.612 [2024-11-17 08:29:21.508282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:16.612 [2024-11-17 08:29:21.508291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:16.612 [2024-11-17 08:29:21.508300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.612 
[2024-11-17 08:29:21.508310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:16.612 [2024-11-17 08:29:21.508322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:29:16.612 [2024-11-17 08:29:21.508332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.612 [2024-11-17 08:29:21.522397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.612 [2024-11-17 08:29:21.522434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:16.612 [2024-11-17 08:29:21.522466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.041 ms 00:29:16.612 [2024-11-17 08:29:21.522477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.612 [2024-11-17 08:29:21.522847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.612 [2024-11-17 08:29:21.522871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:16.612 [2024-11-17 08:29:21.522884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:29:16.612 [2024-11-17 08:29:21.522901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.612 [2024-11-17 08:29:21.557564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.612 [2024-11-17 08:29:21.557726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:16.612 [2024-11-17 08:29:21.557855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.612 [2024-11-17 08:29:21.557983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.612 [2024-11-17 08:29:21.558148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.612 [2024-11-17 08:29:21.558228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:16.612 [2024-11-17 08:29:21.558380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.612 [2024-11-17 08:29:21.558486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.612 [2024-11-17 08:29:21.558637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.612 [2024-11-17 08:29:21.558737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:16.612 [2024-11-17 08:29:21.558858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.612 [2024-11-17 08:29:21.559001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.612 [2024-11-17 08:29:21.559172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.612 [2024-11-17 08:29:21.559315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:16.612 [2024-11-17 08:29:21.559477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.612 [2024-11-17 08:29:21.559572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.871 [2024-11-17 08:29:21.639621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.871 [2024-11-17 08:29:21.639864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:16.871 [2024-11-17 08:29:21.640005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.871 [2024-11-17 08:29:21.640178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.871 [2024-11-17 08:29:21.704973] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.871 [2024-11-17 08:29:21.705224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:16.871 [2024-11-17 08:29:21.705271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.872 [2024-11-17 08:29:21.705293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.872 [2024-11-17 08:29:21.705445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.872 [2024-11-17 08:29:21.705464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:16.872 [2024-11-17 08:29:21.705476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.872 [2024-11-17 08:29:21.705486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.872 [2024-11-17 08:29:21.705578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.872 [2024-11-17 08:29:21.705592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:16.872 [2024-11-17 08:29:21.705602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.872 [2024-11-17 08:29:21.705611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.872 [2024-11-17 08:29:21.705702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.872 [2024-11-17 08:29:21.705720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:16.872 [2024-11-17 08:29:21.705731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.872 [2024-11-17 08:29:21.705740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.872 [2024-11-17 08:29:21.705773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.872 [2024-11-17 08:29:21.705788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:16.872 [2024-11-17 08:29:21.705799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.872 [2024-11-17 08:29:21.705808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.872 [2024-11-17 08:29:21.705847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.872 [2024-11-17 08:29:21.705864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:16.872 [2024-11-17 08:29:21.705874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.872 [2024-11-17 08:29:21.705884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.872 [2024-11-17 08:29:21.705927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.872 [2024-11-17 08:29:21.705941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:16.872 [2024-11-17 08:29:21.705951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.872 [2024-11-17 08:29:21.705960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.872 [2024-11-17 08:29:21.706081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 207.585 ms, result 0 00:29:17.440 00:29:17.440 00:29:17.440 08:29:22 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:19.346 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:19.346 08:29:24 
ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:19.346 [2024-11-17 08:29:24.299623] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:19.346 [2024-11-17 08:29:24.299790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82378 ] 00:29:19.605 [2024-11-17 08:29:24.483182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.605 [2024-11-17 08:29:24.607664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.864 [2024-11-17 08:29:24.864714] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:19.864 [2024-11-17 08:29:24.864803] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:20.125 [2024-11-17 08:29:25.020738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.125 [2024-11-17 08:29:25.020785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:20.125 [2024-11-17 08:29:25.020823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:20.125 [2024-11-17 08:29:25.020833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.125 [2024-11-17 08:29:25.020888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.125 [2024-11-17 08:29:25.020905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:20.125 [2024-11-17 08:29:25.020919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:20.125 [2024-11-17 08:29:25.020928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-17 08:29:25.020953] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:20.126 [2024-11-17 08:29:25.021873] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:20.126 [2024-11-17 08:29:25.021921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.126 [2024-11-17 08:29:25.021932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:20.126 [2024-11-17 08:29:25.021943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:29:20.126 [2024-11-17 08:29:25.021952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-17 08:29:25.022429] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:20.126 [2024-11-17 08:29:25.022479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.126 [2024-11-17 08:29:25.022492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:20.126 [2024-11-17 08:29:25.022509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:20.126 [2024-11-17 08:29:25.022534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-17 08:29:25.022586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.126 [2024-11-17 08:29:25.022601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Validate super block 00:29:20.126 [2024-11-17 08:29:25.022612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:20.126 [2024-11-17 08:29:25.022621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-17 08:29:25.023163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.126 [2024-11-17 08:29:25.023211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:20.126 [2024-11-17 08:29:25.023225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:29:20.126 [2024-11-17 08:29:25.023249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-17 08:29:25.023324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.126 [2024-11-17 08:29:25.023341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:20.126 [2024-11-17 08:29:25.023352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:20.126 [2024-11-17 08:29:25.023361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-17 08:29:25.023392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.126 [2024-11-17 08:29:25.023405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:20.126 [2024-11-17 08:29:25.023415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:20.126 [2024-11-17 08:29:25.023430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-17 08:29:25.023520] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:20.126 [2024-11-17 08:29:25.027382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.126 [2024-11-17 08:29:25.027421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:20.126 [2024-11-17 08:29:25.027435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.870 ms 00:29:20.126 [2024-11-17 08:29:25.027444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-17 08:29:25.027519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.126 [2024-11-17 08:29:25.027547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:20.126 [2024-11-17 08:29:25.027563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:20.126 [2024-11-17 08:29:25.027573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-17 08:29:25.027641] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:20.126 [2024-11-17 08:29:25.027680] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:20.126 [2024-11-17 08:29:25.027733] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:20.126 [2024-11-17 08:29:25.027763] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:20.126 [2024-11-17 08:29:25.027889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:20.126 [2024-11-17 08:29:25.027930] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 
00:29:20.126 [2024-11-17 08:29:25.027956] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:20.126 [2024-11-17 08:29:25.027980] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:20.126 [2024-11-17 08:29:25.028001] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:20.126 [2024-11-17 08:29:25.028015] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:20.126 [2024-11-17 08:29:25.028042] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:20.126 [2024-11-17 08:29:25.028060] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:20.126 [2024-11-17 08:29:25.028071] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:20.126 [2024-11-17 08:29:25.028096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.126 [2024-11-17 08:29:25.028112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:20.126 [2024-11-17 08:29:25.028123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:29:20.126 [2024-11-17 08:29:25.028137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-17 08:29:25.028251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.126 [2024-11-17 08:29:25.028280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:20.126 [2024-11-17 08:29:25.028299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:20.126 [2024-11-17 08:29:25.028316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.126 [2024-11-17 08:29:25.028455] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:20.126 [2024-11-17 08:29:25.028499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:20.126 [2024-11-17 08:29:25.028522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:20.126 [2024-11-17 08:29:25.028541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.126 [2024-11-17 08:29:25.028560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:20.126 [2024-11-17 08:29:25.028571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:20.126 [2024-11-17 08:29:25.028581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:20.126 [2024-11-17 08:29:25.028591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:20.126 [2024-11-17 08:29:25.028600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:20.126 [2024-11-17 08:29:25.028611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:20.126 [2024-11-17 08:29:25.028627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:20.126 [2024-11-17 08:29:25.028645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:20.126 [2024-11-17 08:29:25.028656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:20.126 [2024-11-17 08:29:25.028665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:20.126 [2024-11-17 08:29:25.028675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:20.126 [2024-11-17 08:29:25.028684] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.126 [2024-11-17 08:29:25.028693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:20.126 [2024-11-17 08:29:25.028727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:20.126 [2024-11-17 08:29:25.028746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.126 [2024-11-17 08:29:25.028760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:20.126 [2024-11-17 08:29:25.028773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:20.126 [2024-11-17 08:29:25.028791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.126 [2024-11-17 08:29:25.028809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:20.126 [2024-11-17 08:29:25.028828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:20.126 [2024-11-17 08:29:25.028845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.126 [2024-11-17 08:29:25.028856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:20.126 [2024-11-17 08:29:25.028865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:20.126 [2024-11-17 08:29:25.028874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.126 [2024-11-17 08:29:25.028883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:20.126 [2024-11-17 08:29:25.028911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:20.126 [2024-11-17 08:29:25.028927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:20.126 [2024-11-17 08:29:25.028943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:20.126 [2024-11-17 08:29:25.028960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:20.126 [2024-11-17 08:29:25.028973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:20.126 [2024-11-17 08:29:25.028990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:20.126 [2024-11-17 08:29:25.029010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:20.126 [2024-11-17 08:29:25.029027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:20.126 [2024-11-17 08:29:25.029045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:20.126 [2024-11-17 08:29:25.029063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:20.126 [2024-11-17 08:29:25.029079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.126 [2024-11-17 08:29:25.029153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:20.126 [2024-11-17 08:29:25.029176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:20.126 [2024-11-17 08:29:25.029189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.126 [2024-11-17 08:29:25.029206] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:20.126 [2024-11-17 08:29:25.029228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:20.126 [2024-11-17 08:29:25.029249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:20.126 [2024-11-17 08:29:25.029271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:20.127 
[2024-11-17 08:29:25.029291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:20.127 [2024-11-17 08:29:25.029311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:20.127 [2024-11-17 08:29:25.029330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:20.127 [2024-11-17 08:29:25.029349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:20.127 [2024-11-17 08:29:25.029367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:20.127 [2024-11-17 08:29:25.029386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:20.127 [2024-11-17 08:29:25.029403] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:20.127 [2024-11-17 08:29:25.029438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:20.127 [2024-11-17 08:29:25.029460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:20.127 [2024-11-17 08:29:25.029515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:20.127 [2024-11-17 08:29:25.029535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:20.127 [2024-11-17 08:29:25.029553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:20.127 [2024-11-17 08:29:25.029571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:20.127 [2024-11-17 08:29:25.029591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:20.127 [2024-11-17 08:29:25.029610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:20.127 [2024-11-17 08:29:25.029629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:20.127 [2024-11-17 08:29:25.029643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:20.127 [2024-11-17 08:29:25.029655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:20.127 [2024-11-17 08:29:25.029673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:20.127 [2024-11-17 08:29:25.029692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:20.127 [2024-11-17 08:29:25.029709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:20.127 [2024-11-17 08:29:25.029727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:20.127 [2024-11-17 08:29:25.029738] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB 
metadata layout - base dev: 00:29:20.127 [2024-11-17 08:29:25.029748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:20.127 [2024-11-17 08:29:25.029759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:20.127 [2024-11-17 08:29:25.029769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:20.127 [2024-11-17 08:29:25.029778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:20.127 [2024-11-17 08:29:25.029788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:20.127 [2024-11-17 08:29:25.029799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.029809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:20.127 [2024-11-17 08:29:25.029827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.435 ms 00:29:20.127 [2024-11-17 08:29:25.029845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.054202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.054248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:20.127 [2024-11-17 08:29:25.054280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.279 ms 00:29:20.127 [2024-11-17 08:29:25.054290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.054375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.054389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:20.127 [2024-11-17 08:29:25.054399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:20.127 [2024-11-17 08:29:25.054413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.094110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.094172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:20.127 [2024-11-17 08:29:25.094204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.631 ms 00:29:20.127 [2024-11-17 08:29:25.094215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.094270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.094287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:20.127 [2024-11-17 08:29:25.094298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:20.127 [2024-11-17 08:29:25.094308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.094466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.094495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:20.127 [2024-11-17 08:29:25.094517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:29:20.127 [2024-11-17 08:29:25.094544] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.094729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.094776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:20.127 [2024-11-17 08:29:25.094795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:29:20.127 [2024-11-17 08:29:25.094814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.108374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.108414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:20.127 [2024-11-17 08:29:25.108444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.515 ms 00:29:20.127 [2024-11-17 08:29:25.108455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.108585] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:20.127 [2024-11-17 08:29:25.108605] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:20.127 [2024-11-17 08:29:25.108618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.108628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:20.127 [2024-11-17 08:29:25.108658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:20.127 [2024-11-17 08:29:25.108669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.119404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.119436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:20.127 [2024-11-17 08:29:25.119493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.704 ms 00:29:20.127 [2024-11-17 08:29:25.119512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.119657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.119675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:20.127 [2024-11-17 08:29:25.119688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:29:20.127 [2024-11-17 08:29:25.119699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.119882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.119923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:20.127 [2024-11-17 08:29:25.119947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:20.127 [2024-11-17 08:29:25.119965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.120754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.120797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:20.127 [2024-11-17 08:29:25.120810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:29:20.127 [2024-11-17 08:29:25.120820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:29:20.127 [2024-11-17 08:29:25.120841] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:20.127 [2024-11-17 08:29:25.120862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.120872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:20.127 [2024-11-17 08:29:25.120883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:20.127 [2024-11-17 08:29:25.120892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.127 [2024-11-17 08:29:25.132500] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:20.127 [2024-11-17 08:29:25.132787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.127 [2024-11-17 08:29:25.132823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:20.127 [2024-11-17 08:29:25.132845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.872 ms 00:29:20.127 [2024-11-17 08:29:25.132865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.387 [2024-11-17 08:29:25.135060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.387 [2024-11-17 08:29:25.135139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:20.387 [2024-11-17 08:29:25.135170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.158 ms 00:29:20.387 [2024-11-17 08:29:25.135181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.387 [2024-11-17 08:29:25.135286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.387 [2024-11-17 08:29:25.135304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:20.387 [2024-11-17 08:29:25.135316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:20.387 [2024-11-17 08:29:25.135326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.387 [2024-11-17 08:29:25.135375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.387 [2024-11-17 08:29:25.135398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:20.387 [2024-11-17 08:29:25.135423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:20.387 [2024-11-17 08:29:25.135441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.387 [2024-11-17 08:29:25.135550] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:20.387 [2024-11-17 08:29:25.135575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.387 [2024-11-17 08:29:25.135588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:20.387 [2024-11-17 08:29:25.135600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:20.387 [2024-11-17 08:29:25.135612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.387 [2024-11-17 08:29:25.164953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.387 [2024-11-17 08:29:25.165013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:20.387 [2024-11-17 08:29:25.165046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.300 ms 00:29:20.387 [2024-11-17 08:29:25.165057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:20.387 [2024-11-17 08:29:25.165187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.387 [2024-11-17 08:29:25.165207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:20.387 [2024-11-17 08:29:25.165219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:20.387 [2024-11-17 08:29:25.165229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.387 [2024-11-17 08:29:25.166861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 145.584 ms, result 0 00:29:21.324  [2024-11-17T08:29:27.274Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-17T08:29:28.212Z] Copying: 45/1024 [MB] (22 MBps) [2024-11-17T08:29:29.592Z] Copying: 68/1024 [MB] (23 MBps) [2024-11-17T08:29:30.528Z] Copying: 91/1024 [MB] (22 MBps) [2024-11-17T08:29:31.466Z] Copying: 114/1024 [MB] (23 MBps) [2024-11-17T08:29:32.403Z] Copying: 137/1024 [MB] (23 MBps) [2024-11-17T08:29:33.340Z] Copying: 161/1024 [MB] (23 MBps) [2024-11-17T08:29:34.277Z] Copying: 185/1024 [MB] (23 MBps) [2024-11-17T08:29:35.214Z] Copying: 208/1024 [MB] (23 MBps) [2024-11-17T08:29:36.593Z] Copying: 231/1024 [MB] (23 MBps) [2024-11-17T08:29:37.530Z] Copying: 255/1024 [MB] (23 MBps) [2024-11-17T08:29:38.467Z] Copying: 278/1024 [MB] (23 MBps) [2024-11-17T08:29:39.412Z] Copying: 301/1024 [MB] (23 MBps) [2024-11-17T08:29:40.348Z] Copying: 325/1024 [MB] (23 MBps) [2024-11-17T08:29:41.295Z] Copying: 348/1024 [MB] (23 MBps) [2024-11-17T08:29:42.279Z] Copying: 371/1024 [MB] (22 MBps) [2024-11-17T08:29:43.215Z] Copying: 394/1024 [MB] (23 MBps) [2024-11-17T08:29:44.594Z] Copying: 417/1024 [MB] (22 MBps) [2024-11-17T08:29:45.531Z] Copying: 439/1024 [MB] (22 MBps) [2024-11-17T08:29:46.469Z] Copying: 463/1024 [MB] (23 MBps) [2024-11-17T08:29:47.407Z] Copying: 486/1024 [MB] (23 MBps) [2024-11-17T08:29:48.344Z] Copying: 509/1024 [MB] (22 MBps) [2024-11-17T08:29:49.282Z] Copying: 532/1024 [MB] (23 MBps) [2024-11-17T08:29:50.220Z] Copying: 555/1024 [MB] (23 MBps) [2024-11-17T08:29:51.600Z] Copying: 578/1024 [MB] (22 MBps) [2024-11-17T08:29:52.538Z] Copying: 601/1024 [MB] (23 MBps) [2024-11-17T08:29:53.475Z] Copying: 624/1024 [MB] (23 MBps) [2024-11-17T08:29:54.412Z] Copying: 647/1024 [MB] (22 MBps) [2024-11-17T08:29:55.349Z] Copying: 670/1024 [MB] (22 MBps) [2024-11-17T08:29:56.286Z] Copying: 692/1024 [MB] (22 MBps) [2024-11-17T08:29:57.224Z] Copying: 715/1024 [MB] (22 MBps) [2024-11-17T08:29:58.603Z] Copying: 738/1024 [MB] (23 MBps) [2024-11-17T08:29:59.539Z] Copying: 762/1024 [MB] (23 MBps) [2024-11-17T08:30:00.476Z] Copying: 785/1024 [MB] (23 MBps) [2024-11-17T08:30:01.414Z] Copying: 809/1024 [MB] (23 MBps) [2024-11-17T08:30:02.350Z] Copying: 833/1024 [MB] (24 MBps) [2024-11-17T08:30:03.288Z] Copying: 856/1024 [MB] (22 MBps) [2024-11-17T08:30:04.225Z] Copying: 879/1024 [MB] (22 MBps) [2024-11-17T08:30:05.601Z] Copying: 901/1024 [MB] (22 MBps) [2024-11-17T08:30:06.538Z] Copying: 924/1024 [MB] (22 MBps) [2024-11-17T08:30:07.474Z] Copying: 946/1024 [MB] (22 MBps) [2024-11-17T08:30:08.410Z] Copying: 968/1024 [MB] (22 MBps) [2024-11-17T08:30:09.354Z] Copying: 991/1024 [MB] (22 MBps) [2024-11-17T08:30:10.297Z] Copying: 1013/1024 [MB] (22 MBps) [2024-11-17T08:30:10.899Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-17T08:30:10.899Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-17 08:30:10.685964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.887 [2024-11-17 
08:30:10.686116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:05.887 [2024-11-17 08:30:10.686139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:05.887 [2024-11-17 08:30:10.686149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.887 [2024-11-17 08:30:10.686910] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:05.887 [2024-11-17 08:30:10.693011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.887 [2024-11-17 08:30:10.693048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:05.887 [2024-11-17 08:30:10.693076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.069 ms 00:30:05.887 [2024-11-17 08:30:10.693087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.887 [2024-11-17 08:30:10.704089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.887 [2024-11-17 08:30:10.704145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:05.887 [2024-11-17 08:30:10.704175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.356 ms 00:30:05.887 [2024-11-17 08:30:10.704185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.887 [2024-11-17 08:30:10.704217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.887 [2024-11-17 08:30:10.704232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:05.887 [2024-11-17 08:30:10.704243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:05.887 [2024-11-17 08:30:10.704253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.887 [2024-11-17 08:30:10.704303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.887 [2024-11-17 08:30:10.704317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:05.887 [2024-11-17 08:30:10.704331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:05.887 [2024-11-17 08:30:10.704341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.887 [2024-11-17 08:30:10.704357] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:05.887 [2024-11-17 08:30:10.704372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116480 / 261120 wr_cnt: 1 state: open 00:30:05.887 [2024-11-17 08:30:10.704384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:05.887 [2024-11-17 08:30:10.704425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 
00:30:05.888 [2024-11-17 08:30:10.704485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 
wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.704992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705278] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:05.888 [2024-11-17 08:30:10.705370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:05.889 [2024-11-17 08:30:10.705381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:05.889 [2024-11-17 08:30:10.705391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:05.889 [2024-11-17 08:30:10.705401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:05.889 [2024-11-17 08:30:10.705411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:05.889 [2024-11-17 08:30:10.705421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:05.889 [2024-11-17 08:30:10.705432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:05.889 [2024-11-17 08:30:10.705442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:05.889 [2024-11-17 08:30:10.705452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:05.889 [2024-11-17 08:30:10.705470] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:05.889 [2024-11-17 08:30:10.705480] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef9185a3-0142-4f78-a994-a1072823e554 00:30:05.889 [2024-11-17 08:30:10.705490] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116480 00:30:05.889 [2024-11-17 08:30:10.705500] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116512 00:30:05.889 [2024-11-17 08:30:10.705525] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116480 00:30:05.889 [2024-11-17 08:30:10.705535] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:30:05.889 [2024-11-17 08:30:10.705545] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:05.889 [2024-11-17 08:30:10.705554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:05.889 [2024-11-17 08:30:10.705569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 
0 00:30:05.889 [2024-11-17 08:30:10.705578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:05.889 [2024-11-17 08:30:10.705587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:05.889 [2024-11-17 08:30:10.705596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.889 [2024-11-17 08:30:10.705606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:05.889 [2024-11-17 08:30:10.705617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:30:05.889 [2024-11-17 08:30:10.705626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.889 [2024-11-17 08:30:10.719767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.889 [2024-11-17 08:30:10.719818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:05.889 [2024-11-17 08:30:10.719848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.121 ms 00:30:05.889 [2024-11-17 08:30:10.719864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.889 [2024-11-17 08:30:10.720273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.889 [2024-11-17 08:30:10.720297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:05.889 [2024-11-17 08:30:10.720310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:30:05.889 [2024-11-17 08:30:10.720320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.889 [2024-11-17 08:30:10.753359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.889 [2024-11-17 08:30:10.753397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:05.889 [2024-11-17 08:30:10.753441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.889 [2024-11-17 08:30:10.753451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.889 [2024-11-17 08:30:10.753529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.889 [2024-11-17 08:30:10.753547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:05.889 [2024-11-17 08:30:10.753558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.889 [2024-11-17 08:30:10.753567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.889 [2024-11-17 08:30:10.753680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.889 [2024-11-17 08:30:10.753699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:05.889 [2024-11-17 08:30:10.753710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.889 [2024-11-17 08:30:10.753726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.889 [2024-11-17 08:30:10.753746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.889 [2024-11-17 08:30:10.753759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:05.889 [2024-11-17 08:30:10.753770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.889 [2024-11-17 08:30:10.753779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.889 [2024-11-17 08:30:10.832094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.889 [2024-11-17 08:30:10.832150] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:05.889 [2024-11-17 08:30:10.832187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.889 [2024-11-17 08:30:10.832196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.148 [2024-11-17 08:30:10.898125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.148 [2024-11-17 08:30:10.898223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:06.148 [2024-11-17 08:30:10.898276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.148 [2024-11-17 08:30:10.898287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.148 [2024-11-17 08:30:10.898357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.148 [2024-11-17 08:30:10.898373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:06.148 [2024-11-17 08:30:10.898384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.148 [2024-11-17 08:30:10.898408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.148 [2024-11-17 08:30:10.898490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.148 [2024-11-17 08:30:10.898521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:06.148 [2024-11-17 08:30:10.898532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.148 [2024-11-17 08:30:10.898542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.148 [2024-11-17 08:30:10.898649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.148 [2024-11-17 08:30:10.898669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:06.148 [2024-11-17 08:30:10.898680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.148 [2024-11-17 08:30:10.898690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.148 [2024-11-17 08:30:10.898732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.148 [2024-11-17 08:30:10.898750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:06.148 [2024-11-17 08:30:10.898761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.148 [2024-11-17 08:30:10.898771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.148 [2024-11-17 08:30:10.898811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.148 [2024-11-17 08:30:10.898825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:06.148 [2024-11-17 08:30:10.898835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.148 [2024-11-17 08:30:10.898845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.148 [2024-11-17 08:30:10.898896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.148 [2024-11-17 08:30:10.898912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:06.148 [2024-11-17 08:30:10.898922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.148 [2024-11-17 08:30:10.898932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.148 [2024-11-17 08:30:10.899060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL fast shutdown', duration = 217.365 ms, result 0 00:30:07.084 00:30:07.084 00:30:07.084 08:30:12 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:30:07.344 [2024-11-17 08:30:12.166565] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:30:07.344 [2024-11-17 08:30:12.166741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82834 ] 00:30:07.344 [2024-11-17 08:30:12.341995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.603 [2024-11-17 08:30:12.430560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.863 [2024-11-17 08:30:12.683934] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:07.863 [2024-11-17 08:30:12.684036] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:07.863 [2024-11-17 08:30:12.840322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.863 [2024-11-17 08:30:12.840387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:07.863 [2024-11-17 08:30:12.840426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:07.863 [2024-11-17 08:30:12.840437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.863 [2024-11-17 08:30:12.840496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.863 [2024-11-17 08:30:12.840514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:07.863 [2024-11-17 08:30:12.840531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:07.863 [2024-11-17 08:30:12.840541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.863 [2024-11-17 08:30:12.840570] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:07.863 [2024-11-17 08:30:12.841511] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:07.863 [2024-11-17 08:30:12.841566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.863 [2024-11-17 08:30:12.841579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:07.863 [2024-11-17 08:30:12.841590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:30:07.863 [2024-11-17 08:30:12.841600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.863 [2024-11-17 08:30:12.842056] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:07.863 [2024-11-17 08:30:12.842126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.863 [2024-11-17 08:30:12.842156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:07.863 [2024-11-17 08:30:12.842175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:07.863 [2024-11-17 08:30:12.842186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.863 [2024-11-17 08:30:12.842241] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.863 [2024-11-17 08:30:12.842275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:07.863 [2024-11-17 08:30:12.842287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:07.863 [2024-11-17 08:30:12.842298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.863 [2024-11-17 08:30:12.842702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.863 [2024-11-17 08:30:12.842735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:07.863 [2024-11-17 08:30:12.842749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:30:07.863 [2024-11-17 08:30:12.842760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.863 [2024-11-17 08:30:12.842844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.863 [2024-11-17 08:30:12.842863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:07.863 [2024-11-17 08:30:12.842874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:07.863 [2024-11-17 08:30:12.842885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.863 [2024-11-17 08:30:12.842917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.863 [2024-11-17 08:30:12.842933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:07.863 [2024-11-17 08:30:12.842945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:07.863 [2024-11-17 08:30:12.842961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.863 [2024-11-17 08:30:12.842991] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:07.863 [2024-11-17 08:30:12.847022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.863 [2024-11-17 08:30:12.847103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:07.863 [2024-11-17 08:30:12.847136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.038 ms 00:30:07.863 [2024-11-17 08:30:12.847147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.863 [2024-11-17 08:30:12.847184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.863 [2024-11-17 08:30:12.847200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:07.863 [2024-11-17 08:30:12.847211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:07.863 [2024-11-17 08:30:12.847221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.863 [2024-11-17 08:30:12.847287] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:07.863 [2024-11-17 08:30:12.847330] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:07.863 [2024-11-17 08:30:12.847388] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:07.863 [2024-11-17 08:30:12.847408] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:07.863 [2024-11-17 08:30:12.847552] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:07.863 
[2024-11-17 08:30:12.847569] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:07.863 [2024-11-17 08:30:12.847583] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:07.863 [2024-11-17 08:30:12.847598] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:07.863 [2024-11-17 08:30:12.847611] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:07.863 [2024-11-17 08:30:12.847623] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:07.863 [2024-11-17 08:30:12.847639] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:07.863 [2024-11-17 08:30:12.847650] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:07.863 [2024-11-17 08:30:12.847660] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:07.863 [2024-11-17 08:30:12.847672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.863 [2024-11-17 08:30:12.847683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:07.863 [2024-11-17 08:30:12.847695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:30:07.863 [2024-11-17 08:30:12.847706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.863 [2024-11-17 08:30:12.847800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.863 [2024-11-17 08:30:12.847817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:07.863 [2024-11-17 08:30:12.847829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:07.863 [2024-11-17 08:30:12.847859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:07.863 [2024-11-17 08:30:12.847968] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:07.863 [2024-11-17 08:30:12.847988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:07.863 [2024-11-17 08:30:12.848001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:07.863 [2024-11-17 08:30:12.848012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:07.863 [2024-11-17 08:30:12.848024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:07.863 [2024-11-17 08:30:12.848034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:07.863 [2024-11-17 08:30:12.848044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:07.863 [2024-11-17 08:30:12.848054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:07.863 [2024-11-17 08:30:12.848065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:07.863 [2024-11-17 08:30:12.848075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:07.863 [2024-11-17 08:30:12.848085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:07.863 [2024-11-17 08:30:12.848095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:07.863 [2024-11-17 08:30:12.848105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:07.863 [2024-11-17 08:30:12.848134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:07.863 [2024-11-17 
08:30:12.848145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:07.863 [2024-11-17 08:30:12.848157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:07.863 [2024-11-17 08:30:12.848167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:07.863 [2024-11-17 08:30:12.848190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:07.863 [2024-11-17 08:30:12.848201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:07.863 [2024-11-17 08:30:12.848212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:07.863 [2024-11-17 08:30:12.848222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:07.863 [2024-11-17 08:30:12.848232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:07.863 [2024-11-17 08:30:12.848242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:07.863 [2024-11-17 08:30:12.848252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:07.863 [2024-11-17 08:30:12.848261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:07.863 [2024-11-17 08:30:12.848272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:07.863 [2024-11-17 08:30:12.848282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:07.864 [2024-11-17 08:30:12.848291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:07.864 [2024-11-17 08:30:12.848301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:07.864 [2024-11-17 08:30:12.848311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:07.864 [2024-11-17 08:30:12.848321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:07.864 [2024-11-17 08:30:12.848331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:07.864 [2024-11-17 08:30:12.848340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:07.864 [2024-11-17 08:30:12.848351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:07.864 [2024-11-17 08:30:12.848361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:07.864 [2024-11-17 08:30:12.848371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:07.864 [2024-11-17 08:30:12.848381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:07.864 [2024-11-17 08:30:12.848391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:07.864 [2024-11-17 08:30:12.848401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:07.864 [2024-11-17 08:30:12.848411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:07.864 [2024-11-17 08:30:12.848420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:07.864 [2024-11-17 08:30:12.848431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:07.864 [2024-11-17 08:30:12.848440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:07.864 [2024-11-17 08:30:12.848450] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:07.864 [2024-11-17 08:30:12.848461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:07.864 [2024-11-17 08:30:12.848471] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.00 MiB 00:30:07.864 [2024-11-17 08:30:12.848482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:07.864 [2024-11-17 08:30:12.848493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:07.864 [2024-11-17 08:30:12.848504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:07.864 [2024-11-17 08:30:12.848514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:07.864 [2024-11-17 08:30:12.848524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:07.864 [2024-11-17 08:30:12.848534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:07.864 [2024-11-17 08:30:12.848544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:07.864 [2024-11-17 08:30:12.848556] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:07.864 [2024-11-17 08:30:12.848574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:07.864 [2024-11-17 08:30:12.848587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:07.864 [2024-11-17 08:30:12.848598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:07.864 [2024-11-17 08:30:12.848609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:07.864 [2024-11-17 08:30:12.848619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:07.864 [2024-11-17 08:30:12.848630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:07.864 [2024-11-17 08:30:12.848641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:07.864 [2024-11-17 08:30:12.848652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:07.864 [2024-11-17 08:30:12.848662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:07.864 [2024-11-17 08:30:12.848673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:07.864 [2024-11-17 08:30:12.848684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:07.864 [2024-11-17 08:30:12.848694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:07.864 [2024-11-17 08:30:12.848705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:07.864 [2024-11-17 08:30:12.848716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:07.864 [2024-11-17 08:30:12.848727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x7220 blk_sz:0x13c0e0 00:30:07.864 [2024-11-17 08:30:12.848737] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:07.864 [2024-11-17 08:30:12.848750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:07.864 [2024-11-17 08:30:12.848762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:07.864 [2024-11-17 08:30:12.848774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:07.864 [2024-11-17 08:30:12.848785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:07.864 [2024-11-17 08:30:12.848796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:07.864 [2024-11-17 08:30:12.848807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:07.864 [2024-11-17 08:30:12.848818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:07.864 [2024-11-17 08:30:12.848829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:30:07.864 [2024-11-17 08:30:12.848840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.123 [2024-11-17 08:30:12.875129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.123 [2024-11-17 08:30:12.875194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:08.123 [2024-11-17 08:30:12.875226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.230 ms 00:30:08.123 [2024-11-17 08:30:12.875237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.123 [2024-11-17 08:30:12.875331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.123 [2024-11-17 08:30:12.875347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:08.123 [2024-11-17 08:30:12.875365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:08.123 [2024-11-17 08:30:12.875375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.123 [2024-11-17 08:30:12.923966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.123 [2024-11-17 08:30:12.924029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:08.123 [2024-11-17 08:30:12.924061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.452 ms 00:30:08.123 [2024-11-17 08:30:12.924071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.123 [2024-11-17 08:30:12.924146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.123 [2024-11-17 08:30:12.924165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:08.123 [2024-11-17 08:30:12.924177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:08.123 [2024-11-17 08:30:12.924187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.123 [2024-11-17 08:30:12.924373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.123 [2024-11-17 08:30:12.924392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
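The superblock layout dump in the records above reports each metadata region as a type/version pair plus a block offset and block size in hex, while ftl_layout_dump prints the same regions in MiB. The two views can be cross-checked by converting blocks to MiB; a minimal bash sketch, assuming the 4 KiB FTL block size implied by the dump itself (the 0x5000-block region of type 0x2 is the same region shown as "l2p ... 80.00 MiB"):

    # Convert an FTL region size from blocks (hex) to MiB.
    # Assumes a 4 KiB block, which is what makes 0x5000 blocks come out
    # to the 80.00 MiB printed for the l2p region above.
    blk_to_mib() {
      echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc
    }
    blk_to_mib 0x5000   # -> 80.00, the l2p region
    blk_to_mib 0x800    # -> 8.00, matching each p2l checkpoint region
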
00:30:08.123 [2024-11-17 08:30:12.924405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:30:08.123 [2024-11-17 08:30:12.924416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.123 [2024-11-17 08:30:12.924558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.123 [2024-11-17 08:30:12.924594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:08.124 [2024-11-17 08:30:12.924607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:30:08.124 [2024-11-17 08:30:12.924618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.938689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.938742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:08.124 [2024-11-17 08:30:12.938772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.044 ms 00:30:08.124 [2024-11-17 08:30:12.938782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.938939] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:08.124 [2024-11-17 08:30:12.938992] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:08.124 [2024-11-17 08:30:12.939021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.939037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:08.124 [2024-11-17 08:30:12.939049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:30:08.124 [2024-11-17 08:30:12.939061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.950195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.950244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:08.124 [2024-11-17 08:30:12.950273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.106 ms 00:30:08.124 [2024-11-17 08:30:12.950283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.950389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.950405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:08.124 [2024-11-17 08:30:12.950416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:30:08.124 [2024-11-17 08:30:12.950432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.950538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.950557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:08.124 [2024-11-17 08:30:12.950569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:08.124 [2024-11-17 08:30:12.950580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.951311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.951372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:08.124 [2024-11-17 08:30:12.951386] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:30:08.124 [2024-11-17 08:30:12.951397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.951442] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:08.124 [2024-11-17 08:30:12.951497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.951509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:08.124 [2024-11-17 08:30:12.951522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:08.124 [2024-11-17 08:30:12.951533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.962571] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:08.124 [2024-11-17 08:30:12.962784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.962802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:08.124 [2024-11-17 08:30:12.962814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.226 ms 00:30:08.124 [2024-11-17 08:30:12.962840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.964993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.965043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:08.124 [2024-11-17 08:30:12.965072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:30:08.124 [2024-11-17 08:30:12.965082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.965169] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:30:08.124 [2024-11-17 08:30:12.965625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.965657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:08.124 [2024-11-17 08:30:12.965670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:30:08.124 [2024-11-17 08:30:12.965681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.965724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.965741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:08.124 [2024-11-17 08:30:12.965752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:08.124 [2024-11-17 08:30:12.965762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.965802] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:08.124 [2024-11-17 08:30:12.965819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.965830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:08.124 [2024-11-17 08:30:12.965841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:30:08.124 [2024-11-17 08:30:12.965851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.992248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
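Each management step in the records above is logged by trace_step as a four-record group: Action (or Rollback), name, duration, and status. When hunting for slow startup steps (e.g. the 48.452 ms "Initialize NV cache" above), the name/duration pairs can be pulled straight out of the console stream; a minimal sketch, assuming one record per line as originally emitted (the log file name is a placeholder):

    # Pair each trace_step name with its duration, e.g.
    # "Initialize NV cache    48.452 ms".
    grep -oE 'name: .*|duration: [0-9.]+ ms' console.log \
      | sed 's/^name: //; s/^duration: //' \
      | paste - -
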
00:30:08.124 [2024-11-17 08:30:12.992303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:08.124 [2024-11-17 08:30:12.992334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.374 ms 00:30:08.124 [2024-11-17 08:30:12.992345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.992426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.124 [2024-11-17 08:30:12.992445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:08.124 [2024-11-17 08:30:12.992457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:08.124 [2024-11-17 08:30:12.992466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.124 [2024-11-17 08:30:12.993787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 152.881 ms, result 0 00:30:09.501  [2024-11-17T08:30:15.449Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-17T08:30:16.386Z] Copying: 40/1024 [MB] (22 MBps) [2024-11-17T08:30:17.322Z] Copying: 62/1024 [MB] (21 MBps) [2024-11-17T08:30:18.258Z] Copying: 83/1024 [MB] (21 MBps) [2024-11-17T08:30:19.194Z] Copying: 104/1024 [MB] (21 MBps) [2024-11-17T08:30:20.570Z] Copying: 126/1024 [MB] (22 MBps) [2024-11-17T08:30:21.507Z] Copying: 148/1024 [MB] (22 MBps) [2024-11-17T08:30:22.441Z] Copying: 170/1024 [MB] (21 MBps) [2024-11-17T08:30:23.377Z] Copying: 192/1024 [MB] (21 MBps) [2024-11-17T08:30:24.311Z] Copying: 214/1024 [MB] (21 MBps) [2024-11-17T08:30:25.247Z] Copying: 235/1024 [MB] (21 MBps) [2024-11-17T08:30:26.182Z] Copying: 257/1024 [MB] (21 MBps) [2024-11-17T08:30:27.557Z] Copying: 279/1024 [MB] (21 MBps) [2024-11-17T08:30:28.493Z] Copying: 301/1024 [MB] (21 MBps) [2024-11-17T08:30:29.427Z] Copying: 322/1024 [MB] (21 MBps) [2024-11-17T08:30:30.363Z] Copying: 344/1024 [MB] (21 MBps) [2024-11-17T08:30:31.299Z] Copying: 366/1024 [MB] (21 MBps) [2024-11-17T08:30:32.235Z] Copying: 387/1024 [MB] (21 MBps) [2024-11-17T08:30:33.611Z] Copying: 409/1024 [MB] (21 MBps) [2024-11-17T08:30:34.179Z] Copying: 431/1024 [MB] (21 MBps) [2024-11-17T08:30:35.553Z] Copying: 453/1024 [MB] (21 MBps) [2024-11-17T08:30:36.488Z] Copying: 475/1024 [MB] (22 MBps) [2024-11-17T08:30:37.423Z] Copying: 497/1024 [MB] (22 MBps) [2024-11-17T08:30:38.361Z] Copying: 518/1024 [MB] (21 MBps) [2024-11-17T08:30:39.352Z] Copying: 540/1024 [MB] (21 MBps) [2024-11-17T08:30:40.289Z] Copying: 563/1024 [MB] (22 MBps) [2024-11-17T08:30:41.225Z] Copying: 585/1024 [MB] (22 MBps) [2024-11-17T08:30:42.603Z] Copying: 606/1024 [MB] (21 MBps) [2024-11-17T08:30:43.540Z] Copying: 628/1024 [MB] (22 MBps) [2024-11-17T08:30:44.476Z] Copying: 650/1024 [MB] (21 MBps) [2024-11-17T08:30:45.413Z] Copying: 673/1024 [MB] (22 MBps) [2024-11-17T08:30:46.351Z] Copying: 695/1024 [MB] (21 MBps) [2024-11-17T08:30:47.289Z] Copying: 717/1024 [MB] (22 MBps) [2024-11-17T08:30:48.226Z] Copying: 738/1024 [MB] (21 MBps) [2024-11-17T08:30:49.606Z] Copying: 761/1024 [MB] (22 MBps) [2024-11-17T08:30:50.174Z] Copying: 783/1024 [MB] (22 MBps) [2024-11-17T08:30:51.552Z] Copying: 805/1024 [MB] (21 MBps) [2024-11-17T08:30:52.489Z] Copying: 826/1024 [MB] (21 MBps) [2024-11-17T08:30:53.426Z] Copying: 848/1024 [MB] (21 MBps) [2024-11-17T08:30:54.363Z] Copying: 870/1024 [MB] (22 MBps) [2024-11-17T08:30:55.300Z] Copying: 892/1024 [MB] (21 MBps) [2024-11-17T08:30:56.237Z] Copying: 915/1024 [MB] (22 MBps) [2024-11-17T08:30:57.175Z] Copying: 937/1024 
[MB] (22 MBps) [2024-11-17T08:30:58.554Z] Copying: 959/1024 [MB] (21 MBps) [2024-11-17T08:30:59.492Z] Copying: 981/1024 [MB] (22 MBps) [2024-11-17T08:31:00.430Z] Copying: 1004/1024 [MB] (22 MBps) [2024-11-17T08:31:00.430Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-17 08:31:00.348971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.418 [2024-11-17 08:31:00.349095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:55.418 [2024-11-17 08:31:00.349131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:55.418 [2024-11-17 08:31:00.349150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.418 [2024-11-17 08:31:00.349200] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:55.418 [2024-11-17 08:31:00.353248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.418 [2024-11-17 08:31:00.353301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:55.418 [2024-11-17 08:31:00.353316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.013 ms 00:30:55.418 [2024-11-17 08:31:00.353327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.418 [2024-11-17 08:31:00.353596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.418 [2024-11-17 08:31:00.353617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:55.418 [2024-11-17 08:31:00.353631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:30:55.418 [2024-11-17 08:31:00.353643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.418 [2024-11-17 08:31:00.353677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.418 [2024-11-17 08:31:00.353693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:55.418 [2024-11-17 08:31:00.353710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:55.418 [2024-11-17 08:31:00.353721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.418 [2024-11-17 08:31:00.353783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.418 [2024-11-17 08:31:00.353817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:55.418 [2024-11-17 08:31:00.353829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:30:55.418 [2024-11-17 08:31:00.353840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.418 [2024-11-17 08:31:00.353860] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:55.418 [2024-11-17 08:31:00.353878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:30:55.418 [2024-11-17 08:31:00.353892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.353903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.353914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.353924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.353936] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.353947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.353958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.353970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.353981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.353992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.354003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.354014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.354026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.354037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:55.418 [2024-11-17 08:31:00.354048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 
08:31:00.354230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:30:55.419 [2024-11-17 08:31:00.354522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.354993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.355004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:55.419 [2024-11-17 08:31:00.355023] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:55.419 [2024-11-17 08:31:00.355034] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ef9185a3-0142-4f78-a994-a1072823e554 00:30:55.419 [2024-11-17 08:31:00.355045] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:30:55.420 [2024-11-17 08:31:00.355056] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 14624 00:30:55.420 [2024-11-17 08:31:00.355066] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14592 00:30:55.420 [2024-11-17 08:31:00.355089] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0022 00:30:55.420 [2024-11-17 08:31:00.355108] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:55.420 [2024-11-17 08:31:00.355120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:55.420 [2024-11-17 08:31:00.355131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:55.420 [2024-11-17 08:31:00.355141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:55.420 [2024-11-17 08:31:00.355151] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:55.420 [2024-11-17 08:31:00.355161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.420 [2024-11-17 08:31:00.355172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:55.420 [2024-11-17 08:31:00.355183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:30:55.420 [2024-11-17 08:31:00.355194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.420 [2024-11-17 08:31:00.371292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.420 [2024-11-17 08:31:00.371366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:55.420 [2024-11-17 08:31:00.371404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.074 ms 00:30:55.420 [2024-11-17 08:31:00.371415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.420 [2024-11-17 08:31:00.371859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.420 [2024-11-17 08:31:00.371890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:55.420 [2024-11-17 08:31:00.371904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:30:55.420 [2024-11-17 08:31:00.371914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.420 [2024-11-17 08:31:00.407870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.420 [2024-11-17 08:31:00.407909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:55.420 [2024-11-17 08:31:00.407938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.420 [2024-11-17 08:31:00.407948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.420 [2024-11-17 08:31:00.408004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.420 [2024-11-17 08:31:00.408019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:55.420 [2024-11-17 08:31:00.408030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.420 [2024-11-17 08:31:00.408040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.420 [2024-11-17 08:31:00.408128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.420 [2024-11-17 08:31:00.408154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:55.420 [2024-11-17 08:31:00.408181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.420 [2024-11-17 08:31:00.408190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.420 [2024-11-17 08:31:00.408236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.420 [2024-11-17 08:31:00.408250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:30:55.420 [2024-11-17 08:31:00.408261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.420 [2024-11-17 08:31:00.408271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.679 [2024-11-17 08:31:00.487759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.679 [2024-11-17 08:31:00.487823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:55.679 [2024-11-17 08:31:00.487855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.679 [2024-11-17 08:31:00.487865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.679 [2024-11-17 08:31:00.552700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.679 [2024-11-17 08:31:00.552750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:55.679 [2024-11-17 08:31:00.552781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.679 [2024-11-17 08:31:00.552791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.679 [2024-11-17 08:31:00.552875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.679 [2024-11-17 08:31:00.552892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:55.679 [2024-11-17 08:31:00.552910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.679 [2024-11-17 08:31:00.552920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.679 [2024-11-17 08:31:00.552962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.679 [2024-11-17 08:31:00.552976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:55.679 [2024-11-17 08:31:00.552987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.679 [2024-11-17 08:31:00.552996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.679 [2024-11-17 08:31:00.553131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.679 [2024-11-17 08:31:00.553172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:55.679 [2024-11-17 08:31:00.553186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.679 [2024-11-17 08:31:00.553202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.679 [2024-11-17 08:31:00.553243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.679 [2024-11-17 08:31:00.553261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:55.679 [2024-11-17 08:31:00.553273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.679 [2024-11-17 08:31:00.553283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.679 [2024-11-17 08:31:00.553325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.679 [2024-11-17 08:31:00.553340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:55.679 [2024-11-17 08:31:00.553351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.679 [2024-11-17 08:31:00.553362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.679 [2024-11-17 08:31:00.553417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:55.679 [2024-11-17 08:31:00.553434] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:55.679 [2024-11-17 08:31:00.553445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:55.679 [2024-11-17 08:31:00.553456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.679 [2024-11-17 08:31:00.553589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 204.599 ms, result 0 00:30:56.616 00:30:56.616 00:30:56.616 08:31:01 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:58.521 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:58.521 Process with pid 81252 is not found 00:30:58.521 Remove shared memory files 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 81252 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 81252 ']' 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 81252 00:30:58.521 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81252) - No such process 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 81252 is not found' 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_ef9185a3-0142-4f78-a994-a1072823e554_band_md /dev/hugepages/ftl_ef9185a3-0142-4f78-a994-a1072823e554_l2p_l1 /dev/hugepages/ftl_ef9185a3-0142-4f78-a994-a1072823e554_l2p_l2 /dev/hugepages/ftl_ef9185a3-0142-4f78-a994-a1072823e554_l2p_l2_ctx /dev/hugepages/ftl_ef9185a3-0142-4f78-a994-a1072823e554_nvc_md /dev/hugepages/ftl_ef9185a3-0142-4f78-a994-a1072823e554_p2l_pool /dev/hugepages/ftl_ef9185a3-0142-4f78-a994-a1072823e554_sb /dev/hugepages/ftl_ef9185a3-0142-4f78-a994-a1072823e554_sb_shm /dev/hugepages/ftl_ef9185a3-0142-4f78-a994-a1072823e554_trim_bitmap /dev/hugepages/ftl_ef9185a3-0142-4f78-a994-a1072823e554_trim_log /dev/hugepages/ftl_ef9185a3-0142-4f78-a994-a1072823e554_trim_md /dev/hugepages/ftl_ef9185a3-0142-4f78-a994-a1072823e554_vmap 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:30:58.521 00:30:58.521 real 3m32.183s 00:30:58.521 user 3m20.009s 00:30:58.521 sys 0m13.663s 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:58.521 08:31:03 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:58.521 
************************************ 00:30:58.521 END TEST ftl_restore_fast 00:30:58.521 ************************************ 00:30:58.521 08:31:03 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:58.521 08:31:03 ftl -- ftl/ftl.sh@14 -- # killprocess 73264 00:30:58.521 08:31:03 ftl -- common/autotest_common.sh@954 -- # '[' -z 73264 ']' 00:30:58.521 08:31:03 ftl -- common/autotest_common.sh@958 -- # kill -0 73264 00:30:58.521 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73264) - No such process 00:30:58.521 Process with pid 73264 is not found 00:30:58.521 08:31:03 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 73264 is not found' 00:30:58.521 08:31:03 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:58.521 08:31:03 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83342 00:30:58.521 08:31:03 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83342 00:30:58.521 08:31:03 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:58.521 08:31:03 ftl -- common/autotest_common.sh@835 -- # '[' -z 83342 ']' 00:30:58.521 08:31:03 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.521 08:31:03 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:58.521 08:31:03 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.521 08:31:03 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:58.521 08:31:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:58.521 [2024-11-17 08:31:03.368383] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:30:58.521 [2024-11-17 08:31:03.368591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83342 ] 00:30:58.779 [2024-11-17 08:31:03.541500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.779 [2024-11-17 08:31:03.622800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.348 08:31:04 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:59.348 08:31:04 ftl -- common/autotest_common.sh@868 -- # return 0 00:30:59.348 08:31:04 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:59.607 nvme0n1 00:30:59.607 08:31:04 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:59.607 08:31:04 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:59.607 08:31:04 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:59.866 08:31:04 ftl -- ftl/common.sh@28 -- # stores=838605d8-88d5-4e53-b5a7-1dea96c9a0d1 00:30:59.866 08:31:04 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:59.866 08:31:04 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 838605d8-88d5-4e53-b5a7-1dea96c9a0d1 00:31:00.124 08:31:05 ftl -- ftl/ftl.sh@23 -- # killprocess 83342 00:31:00.124 08:31:05 ftl -- common/autotest_common.sh@954 -- # '[' -z 83342 ']' 00:31:00.124 08:31:05 ftl -- common/autotest_common.sh@958 -- # kill -0 83342 00:31:00.124 08:31:05 ftl -- common/autotest_common.sh@959 -- # uname 00:31:00.124 08:31:05 ftl -- common/autotest_common.sh@959 -- # 
'[' Linux = Linux ']' 00:31:00.124 08:31:05 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83342 00:31:00.124 08:31:05 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:00.124 08:31:05 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:00.124 killing process with pid 83342 00:31:00.124 08:31:05 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83342' 00:31:00.124 08:31:05 ftl -- common/autotest_common.sh@973 -- # kill 83342 00:31:00.124 08:31:05 ftl -- common/autotest_common.sh@978 -- # wait 83342 00:31:02.031 08:31:06 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:02.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:02.031 Waiting for block devices as requested 00:31:02.031 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:02.290 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:02.290 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:02.290 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:07.588 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:07.588 Remove shared memory files 00:31:07.588 08:31:12 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:31:07.588 08:31:12 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:07.588 08:31:12 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:07.588 08:31:12 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:07.588 08:31:12 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:07.588 08:31:12 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:07.588 08:31:12 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:07.588 00:31:07.588 real 15m29.507s 00:31:07.588 user 18m12.268s 00:31:07.588 sys 1m38.680s 00:31:07.588 08:31:12 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:07.588 08:31:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:07.588 ************************************ 00:31:07.588 END TEST ftl 00:31:07.588 ************************************ 00:31:07.588 08:31:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:07.588 08:31:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:07.588 08:31:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:07.588 08:31:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:07.588 08:31:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:07.588 08:31:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:07.588 08:31:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:07.588 08:31:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:31:07.588 08:31:12 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:31:07.588 08:31:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:31:07.588 08:31:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:07.588 08:31:12 -- common/autotest_common.sh@10 -- # set +x 00:31:07.588 08:31:12 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:31:07.588 08:31:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:31:07.588 08:31:12 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:31:07.588 08:31:12 -- common/autotest_common.sh@10 -- # set +x 00:31:08.969 INFO: APP EXITING 00:31:08.969 INFO: killing all VMs 00:31:08.969 INFO: killing vhost app 00:31:08.969 INFO: EXIT DONE 00:31:09.536 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:09.797 0000:00:11.0 (1b36 0010): Already using the nvme driver 
00:31:09.797 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:09.797 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:09.797 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:10.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:10.625 Cleaning 00:31:10.625 Removing: /var/run/dpdk/spdk0/config 00:31:10.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:10.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:10.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:10.625 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:10.625 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:10.625 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:10.625 Removing: /var/run/dpdk/spdk0 00:31:10.625 Removing: /var/run/dpdk/spdk_pid57537 00:31:10.625 Removing: /var/run/dpdk/spdk_pid57745 00:31:10.625 Removing: /var/run/dpdk/spdk_pid57963 00:31:10.625 Removing: /var/run/dpdk/spdk_pid58061 00:31:10.625 Removing: /var/run/dpdk/spdk_pid58106 00:31:10.625 Removing: /var/run/dpdk/spdk_pid58229 00:31:10.625 Removing: /var/run/dpdk/spdk_pid58247 00:31:10.625 Removing: /var/run/dpdk/spdk_pid58446 00:31:10.625 Removing: /var/run/dpdk/spdk_pid58544 00:31:10.625 Removing: /var/run/dpdk/spdk_pid58646 00:31:10.625 Removing: /var/run/dpdk/spdk_pid58757 00:31:10.625 Removing: /var/run/dpdk/spdk_pid58854 00:31:10.625 Removing: /var/run/dpdk/spdk_pid58888 00:31:10.625 Removing: /var/run/dpdk/spdk_pid58930 00:31:10.625 Removing: /var/run/dpdk/spdk_pid59006 00:31:10.625 Removing: /var/run/dpdk/spdk_pid59112 00:31:10.884 Removing: /var/run/dpdk/spdk_pid59565 00:31:10.884 Removing: /var/run/dpdk/spdk_pid59629 00:31:10.884 Removing: /var/run/dpdk/spdk_pid59692 00:31:10.884 Removing: /var/run/dpdk/spdk_pid59708 00:31:10.884 Removing: /var/run/dpdk/spdk_pid59822 00:31:10.884 Removing: /var/run/dpdk/spdk_pid59838 00:31:10.884 Removing: /var/run/dpdk/spdk_pid59951 00:31:10.884 Removing: /var/run/dpdk/spdk_pid59967 00:31:10.884 Removing: /var/run/dpdk/spdk_pid60033 00:31:10.884 Removing: /var/run/dpdk/spdk_pid60051 00:31:10.884 Removing: /var/run/dpdk/spdk_pid60110 00:31:10.884 Removing: /var/run/dpdk/spdk_pid60122 00:31:10.884 Removing: /var/run/dpdk/spdk_pid60302 00:31:10.884 Removing: /var/run/dpdk/spdk_pid60338 00:31:10.884 Removing: /var/run/dpdk/spdk_pid60427 00:31:10.884 Removing: /var/run/dpdk/spdk_pid60605 00:31:10.884 Removing: /var/run/dpdk/spdk_pid60689 00:31:10.884 Removing: /var/run/dpdk/spdk_pid60733 00:31:10.884 Removing: /var/run/dpdk/spdk_pid61186 00:31:10.884 Removing: /var/run/dpdk/spdk_pid61284 00:31:10.884 Removing: /var/run/dpdk/spdk_pid61388 00:31:10.884 Removing: /var/run/dpdk/spdk_pid61442 00:31:10.884 Removing: /var/run/dpdk/spdk_pid61467 00:31:10.884 Removing: /var/run/dpdk/spdk_pid61546 00:31:10.884 Removing: /var/run/dpdk/spdk_pid62172 00:31:10.884 Removing: /var/run/dpdk/spdk_pid62214 00:31:10.884 Removing: /var/run/dpdk/spdk_pid62712 00:31:10.884 Removing: /var/run/dpdk/spdk_pid62810 00:31:10.884 Removing: /var/run/dpdk/spdk_pid62919 00:31:10.884 Removing: /var/run/dpdk/spdk_pid62972 00:31:10.884 Removing: /var/run/dpdk/spdk_pid62998 00:31:10.884 Removing: /var/run/dpdk/spdk_pid63023 00:31:10.884 Removing: /var/run/dpdk/spdk_pid64895 00:31:10.884 Removing: /var/run/dpdk/spdk_pid65032 00:31:10.884 Removing: /var/run/dpdk/spdk_pid65046 00:31:10.884 Removing: /var/run/dpdk/spdk_pid65059 00:31:10.884 Removing: /var/run/dpdk/spdk_pid65098 00:31:10.884 
Removing: /var/run/dpdk/spdk_pid65102 00:31:10.884 Removing: /var/run/dpdk/spdk_pid65114 00:31:10.884 Removing: /var/run/dpdk/spdk_pid65159 00:31:10.884 Removing: /var/run/dpdk/spdk_pid65168 00:31:10.884 Removing: /var/run/dpdk/spdk_pid65180 00:31:10.884 Removing: /var/run/dpdk/spdk_pid65225 00:31:10.884 Removing: /var/run/dpdk/spdk_pid65229 00:31:10.884 Removing: /var/run/dpdk/spdk_pid65241 00:31:10.884 Removing: /var/run/dpdk/spdk_pid66630 00:31:10.885 Removing: /var/run/dpdk/spdk_pid66732 00:31:10.885 Removing: /var/run/dpdk/spdk_pid68142 00:31:10.885 Removing: /var/run/dpdk/spdk_pid69501 00:31:10.885 Removing: /var/run/dpdk/spdk_pid69612 00:31:10.885 Removing: /var/run/dpdk/spdk_pid69705 00:31:10.885 Removing: /var/run/dpdk/spdk_pid69809 00:31:10.885 Removing: /var/run/dpdk/spdk_pid69930 00:31:10.885 Removing: /var/run/dpdk/spdk_pid70000 00:31:10.885 Removing: /var/run/dpdk/spdk_pid70150 00:31:10.885 Removing: /var/run/dpdk/spdk_pid70516 00:31:10.885 Removing: /var/run/dpdk/spdk_pid70547 00:31:10.885 Removing: /var/run/dpdk/spdk_pid71018 00:31:10.885 Removing: /var/run/dpdk/spdk_pid71199 00:31:10.885 Removing: /var/run/dpdk/spdk_pid71294 00:31:10.885 Removing: /var/run/dpdk/spdk_pid71405 00:31:10.885 Removing: /var/run/dpdk/spdk_pid71448 00:31:10.885 Removing: /var/run/dpdk/spdk_pid71479 00:31:10.885 Removing: /var/run/dpdk/spdk_pid71773 00:31:10.885 Removing: /var/run/dpdk/spdk_pid71832 00:31:10.885 Removing: /var/run/dpdk/spdk_pid71917 00:31:10.885 Removing: /var/run/dpdk/spdk_pid72322 00:31:10.885 Removing: /var/run/dpdk/spdk_pid72468 00:31:10.885 Removing: /var/run/dpdk/spdk_pid73264 00:31:10.885 Removing: /var/run/dpdk/spdk_pid73404 00:31:10.885 Removing: /var/run/dpdk/spdk_pid73592 00:31:10.885 Removing: /var/run/dpdk/spdk_pid73697 00:31:10.885 Removing: /var/run/dpdk/spdk_pid74071 00:31:10.885 Removing: /var/run/dpdk/spdk_pid74360 00:31:10.885 Removing: /var/run/dpdk/spdk_pid74703 00:31:10.885 Removing: /var/run/dpdk/spdk_pid74891 00:31:10.885 Removing: /var/run/dpdk/spdk_pid75038 00:31:10.885 Removing: /var/run/dpdk/spdk_pid75091 00:31:10.885 Removing: /var/run/dpdk/spdk_pid75240 00:31:10.885 Removing: /var/run/dpdk/spdk_pid75271 00:31:10.885 Removing: /var/run/dpdk/spdk_pid75329 00:31:10.885 Removing: /var/run/dpdk/spdk_pid75538 00:31:10.885 Removing: /var/run/dpdk/spdk_pid75759 00:31:10.885 Removing: /var/run/dpdk/spdk_pid76211 00:31:10.885 Removing: /var/run/dpdk/spdk_pid76703 00:31:10.885 Removing: /var/run/dpdk/spdk_pid77172 00:31:10.885 Removing: /var/run/dpdk/spdk_pid77740 00:31:11.145 Removing: /var/run/dpdk/spdk_pid77882 00:31:11.145 Removing: /var/run/dpdk/spdk_pid77969 00:31:11.145 Removing: /var/run/dpdk/spdk_pid78695 00:31:11.145 Removing: /var/run/dpdk/spdk_pid78760 00:31:11.145 Removing: /var/run/dpdk/spdk_pid79243 00:31:11.145 Removing: /var/run/dpdk/spdk_pid79669 00:31:11.145 Removing: /var/run/dpdk/spdk_pid80228 00:31:11.145 Removing: /var/run/dpdk/spdk_pid80348 00:31:11.145 Removing: /var/run/dpdk/spdk_pid80394 00:31:11.145 Removing: /var/run/dpdk/spdk_pid80454 00:31:11.145 Removing: /var/run/dpdk/spdk_pid80515 00:31:11.145 Removing: /var/run/dpdk/spdk_pid80575 00:31:11.145 Removing: /var/run/dpdk/spdk_pid80770 00:31:11.145 Removing: /var/run/dpdk/spdk_pid80849 00:31:11.145 Removing: /var/run/dpdk/spdk_pid80913 00:31:11.145 Removing: /var/run/dpdk/spdk_pid80980 00:31:11.145 Removing: /var/run/dpdk/spdk_pid81015 00:31:11.145 Removing: /var/run/dpdk/spdk_pid81082 00:31:11.145 Removing: /var/run/dpdk/spdk_pid81252 00:31:11.145 Removing: 
/var/run/dpdk/spdk_pid81467 00:31:11.145 Removing: /var/run/dpdk/spdk_pid81903 00:31:11.145 Removing: /var/run/dpdk/spdk_pid82378 00:31:11.145 Removing: /var/run/dpdk/spdk_pid82834 00:31:11.145 Removing: /var/run/dpdk/spdk_pid83342 00:31:11.145 Clean 00:31:11.145 08:31:16 -- common/autotest_common.sh@1453 -- # return 0 00:31:11.145 08:31:16 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:31:11.145 08:31:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:11.145 08:31:16 -- common/autotest_common.sh@10 -- # set +x 00:31:11.145 08:31:16 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:31:11.145 08:31:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:11.145 08:31:16 -- common/autotest_common.sh@10 -- # set +x 00:31:11.145 08:31:16 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:11.145 08:31:16 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:11.145 08:31:16 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:11.145 08:31:16 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:31:11.145 08:31:16 -- spdk/autotest.sh@398 -- # hostname 00:31:11.145 08:31:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:11.404 geninfo: WARNING: invalid characters removed from testname! 00:31:37.959 08:31:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:37.959 08:31:41 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:39.863 08:31:44 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:42.395 08:31:47 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:44.938 08:31:49 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:47.471 08:31:52 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:50.001 08:31:54 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:50.001 08:31:54 -- spdk/autorun.sh@1 -- $ timing_finish 00:31:50.001 08:31:54 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:31:50.001 08:31:54 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:50.001 08:31:54 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:31:50.001 08:31:54 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:50.001 + [[ -n 5299 ]] 00:31:50.001 + sudo kill 5299 00:31:50.009 [Pipeline] } 00:31:50.023 [Pipeline] // timeout 00:31:50.027 [Pipeline] } 00:31:50.039 [Pipeline] // stage 00:31:50.044 [Pipeline] } 00:31:50.056 [Pipeline] // catchError 00:31:50.065 [Pipeline] stage 00:31:50.066 [Pipeline] { (Stop VM) 00:31:50.077 [Pipeline] sh 00:31:50.356 + vagrant halt 00:31:53.643 ==> default: Halting domain... 00:32:00.236 [Pipeline] sh 00:32:00.564 + vagrant destroy -f 00:32:03.096 ==> default: Removing domain... 00:32:03.367 [Pipeline] sh 00:32:03.649 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:32:03.658 [Pipeline] } 00:32:03.673 [Pipeline] // stage 00:32:03.679 [Pipeline] } 00:32:03.693 [Pipeline] // dir 00:32:03.698 [Pipeline] } 00:32:03.715 [Pipeline] // wrap 00:32:03.722 [Pipeline] } 00:32:03.736 [Pipeline] // catchError 00:32:03.746 [Pipeline] stage 00:32:03.749 [Pipeline] { (Epilogue) 00:32:03.768 [Pipeline] sh 00:32:04.054 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:09.338 [Pipeline] catchError 00:32:09.340 [Pipeline] { 00:32:09.353 [Pipeline] sh 00:32:09.635 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:09.894 Artifacts sizes are good 00:32:09.904 [Pipeline] } 00:32:09.918 [Pipeline] // catchError 00:32:09.929 [Pipeline] archiveArtifacts 00:32:09.936 Archiving artifacts 00:32:10.040 [Pipeline] cleanWs 00:32:10.053 [WS-CLEANUP] Deleting project workspace... 00:32:10.053 [WS-CLEANUP] Deferred wipeout is used... 00:32:10.059 [WS-CLEANUP] done 00:32:10.061 [Pipeline] } 00:32:10.076 [Pipeline] // stage 00:32:10.082 [Pipeline] } 00:32:10.095 [Pipeline] // node 00:32:10.101 [Pipeline] End of Pipeline 00:32:10.143 Finished: SUCCESS