00:00:00.000 Started by upstream project "autotest-nightly" build number 4171 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3533 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.124 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.125 The recommended git tool is: git 00:00:00.125 using credential 00000000-0000-0000-0000-000000000002 00:00:00.127 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.194 Fetching changes from the remote Git repository 00:00:00.196 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.252 Using shallow fetch with depth 1 00:00:00.252 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.252 > git --version # timeout=10 00:00:00.302 > git --version # 'git version 2.39.2' 00:00:00.302 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.337 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.337 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.451 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.461 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.472 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:06.472 > git config core.sparsecheckout # timeout=10 00:00:06.482 > git read-tree -mu HEAD # timeout=10 00:00:06.497 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:06.517 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:06.517 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:06.643 [Pipeline] Start of Pipeline 00:00:06.657 [Pipeline] library 00:00:06.658 Loading library shm_lib@master 00:00:06.659 Library shm_lib@master is cached. Copying from home. 00:00:06.672 [Pipeline] node 00:00:06.688 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.689 [Pipeline] { 00:00:06.698 [Pipeline] catchError 00:00:06.699 [Pipeline] { 00:00:06.707 [Pipeline] wrap 00:00:06.716 [Pipeline] { 00:00:06.725 [Pipeline] stage 00:00:06.727 [Pipeline] { (Prologue) 00:00:06.744 [Pipeline] echo 00:00:06.746 Node: VM-host-SM38 00:00:06.753 [Pipeline] cleanWs 00:00:06.765 [WS-CLEANUP] Deleting project workspace... 00:00:06.765 [WS-CLEANUP] Deferred wipeout is used... 
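The checkout above reduces to a shallow, single-ref fetch of the build-pool repo followed by a forced checkout of the fetched commit. A minimal manual sketch of the same sequence, using only the git commands and values visible in the log (the Gerrit URL needs the same SPDKCI credentials and proxy the job uses, so this is illustrative rather than directly runnable from outside the CI network):

# hedged sketch: manual equivalent of the Jenkins shallow checkout above
git init jbp && cd jbp
git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
# --depth=1 matches "Using shallow fetch with depth 1" in the log
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1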
00:00:06.772 [WS-CLEANUP] done 00:00:07.023 [Pipeline] setCustomBuildProperty 00:00:07.113 [Pipeline] httpRequest 00:00:08.362 [Pipeline] echo 00:00:08.364 Sorcerer 10.211.164.101 is alive 00:00:08.373 [Pipeline] retry 00:00:08.376 [Pipeline] { 00:00:08.390 [Pipeline] httpRequest 00:00:08.396 HttpMethod: GET 00:00:08.397 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.397 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.398 Response Code: HTTP/1.1 200 OK 00:00:08.399 Success: Status code 200 is in the accepted range: 200,404 00:00:08.399 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.453 [Pipeline] } 00:00:09.472 [Pipeline] // retry 00:00:09.482 [Pipeline] sh 00:00:09.769 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.788 [Pipeline] httpRequest 00:00:10.742 [Pipeline] echo 00:00:10.744 Sorcerer 10.211.164.101 is alive 00:00:10.753 [Pipeline] retry 00:00:10.755 [Pipeline] { 00:00:10.769 [Pipeline] httpRequest 00:00:10.774 HttpMethod: GET 00:00:10.775 URL: http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:00:10.775 Sending request to url: http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:00:10.790 Response Code: HTTP/1.1 200 OK 00:00:10.791 Success: Status code 200 is in the accepted range: 200,404 00:00:10.792 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:01:15.885 [Pipeline] } 00:01:15.904 [Pipeline] // retry 00:01:15.911 [Pipeline] sh 00:01:16.196 + tar --no-same-owner -xf spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:01:19.508 [Pipeline] sh 00:01:19.794 + git -C spdk log --oneline -n5 00:01:19.794 bbce7a874 event: move struct spdk_lw_thread to internal header 00:01:19.794 5031f0f3b module/raid: Assign bdev_io buffers to raid_io 00:01:19.794 dc3ea9d27 bdevperf: Allocate an md buffer for verify op 00:01:19.794 0ce363beb spdk_log: introduce spdk_log_ext API 00:01:19.794 412fced1b bdev/compress: unmap support. 
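Both package downloads above come from the local "Sorcerer" cache at 10.211.164.101: the pipeline fetches a tarball named after the pinned commit and unpacks it with --no-same-owner so extracted files are owned by the invoking jenkins user rather than whatever the archive recorded. A rough shell equivalent (curl stands in for the Jenkins httpRequest step and is an assumption; the tar invocations are taken verbatim from the log):

# hedged sketch: fetch and unpack the cached jbp and spdk snapshots by hand
curl -fo jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz \
    http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
curl -fo spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz \
    http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz
tar --no-same-owner -xf spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz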
00:01:19.813 [Pipeline] writeFile 00:01:19.829 [Pipeline] sh 00:01:20.115 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:20.127 [Pipeline] sh 00:01:20.412 + cat autorun-spdk.conf 00:01:20.412 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.412 SPDK_TEST_NVME=1 00:01:20.412 SPDK_TEST_FTL=1 00:01:20.412 SPDK_TEST_ISAL=1 00:01:20.412 SPDK_RUN_ASAN=1 00:01:20.412 SPDK_RUN_UBSAN=1 00:01:20.412 SPDK_TEST_XNVME=1 00:01:20.412 SPDK_TEST_NVME_FDP=1 00:01:20.412 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.421 RUN_NIGHTLY=1 00:01:20.423 [Pipeline] } 00:01:20.437 [Pipeline] // stage 00:01:20.451 [Pipeline] stage 00:01:20.453 [Pipeline] { (Run VM) 00:01:20.466 [Pipeline] sh 00:01:20.755 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:20.755 + echo 'Start stage prepare_nvme.sh' 00:01:20.755 Start stage prepare_nvme.sh 00:01:20.755 + [[ -n 2 ]] 00:01:20.755 + disk_prefix=ex2 00:01:20.755 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:20.755 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:20.755 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:20.755 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.755 ++ SPDK_TEST_NVME=1 00:01:20.755 ++ SPDK_TEST_FTL=1 00:01:20.755 ++ SPDK_TEST_ISAL=1 00:01:20.755 ++ SPDK_RUN_ASAN=1 00:01:20.755 ++ SPDK_RUN_UBSAN=1 00:01:20.755 ++ SPDK_TEST_XNVME=1 00:01:20.755 ++ SPDK_TEST_NVME_FDP=1 00:01:20.755 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.755 ++ RUN_NIGHTLY=1 00:01:20.755 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:20.755 + nvme_files=() 00:01:20.755 + declare -A nvme_files 00:01:20.755 + backend_dir=/var/lib/libvirt/images/backends 00:01:20.755 + nvme_files['nvme.img']=5G 00:01:20.755 + nvme_files['nvme-cmb.img']=5G 00:01:20.755 + nvme_files['nvme-multi0.img']=4G 00:01:20.755 + nvme_files['nvme-multi1.img']=4G 00:01:20.755 + nvme_files['nvme-multi2.img']=4G 00:01:20.755 + nvme_files['nvme-openstack.img']=8G 00:01:20.755 + nvme_files['nvme-zns.img']=5G 00:01:20.755 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:20.755 + (( SPDK_TEST_FTL == 1 )) 00:01:20.755 + nvme_files["nvme-ftl.img"]=6G 00:01:20.755 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:20.755 + nvme_files["nvme-fdp.img"]=1G 00:01:20.755 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:20.755 + for nvme in "${!nvme_files[@]}" 00:01:20.755 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:21.016 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:21.016 + for nvme in "${!nvme_files[@]}" 00:01:21.016 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:01:21.955 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:21.955 + for nvme in "${!nvme_files[@]}" 00:01:21.955 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:21.955 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:21.955 + for nvme in "${!nvme_files[@]}" 00:01:21.955 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:21.955 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:21.955 + for nvme in "${!nvme_files[@]}" 00:01:21.955 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:21.955 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:21.955 + for nvme in "${!nvme_files[@]}" 00:01:21.955 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:22.215 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:22.215 + for nvme in "${!nvme_files[@]}" 00:01:22.215 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:22.784 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:22.784 + for nvme in "${!nvme_files[@]}" 00:01:22.784 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:01:22.784 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:22.784 + for nvme in "${!nvme_files[@]}" 00:01:22.784 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:23.724 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.724 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:23.724 + echo 'End stage prepare_nvme.sh' 00:01:23.724 End stage prepare_nvme.sh 00:01:23.737 [Pipeline] sh 00:01:24.021 + DISTRO=fedora39 00:01:24.021 + CPUS=10 00:01:24.021 + RAM=12288 00:01:24.021 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:24.021 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:24.021 00:01:24.021 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:24.021 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:24.021 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:24.021 HELP=0 00:01:24.021 DRY_RUN=0 00:01:24.021 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:01:24.021 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:24.021 NVME_AUTO_CREATE=0 00:01:24.021 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:01:24.021 NVME_CMB=,,,, 00:01:24.021 NVME_PMR=,,,, 00:01:24.021 NVME_ZNS=,,,, 00:01:24.021 NVME_MS=true,,,, 00:01:24.021 NVME_FDP=,,,on, 00:01:24.021 SPDK_VAGRANT_DISTRO=fedora39 00:01:24.021 SPDK_VAGRANT_VMCPU=10 00:01:24.021 SPDK_VAGRANT_VMRAM=12288 00:01:24.021 SPDK_VAGRANT_PROVIDER=libvirt 00:01:24.021 SPDK_VAGRANT_HTTP_PROXY= 00:01:24.021 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:24.022 SPDK_OPENSTACK_NETWORK=0 00:01:24.022 VAGRANT_PACKAGE_BOX=0 00:01:24.022 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:24.022 FORCE_DISTRO=true 00:01:24.022 VAGRANT_BOX_VERSION= 00:01:24.022 EXTRA_VAGRANTFILES= 00:01:24.022 NIC_MODEL=e1000 00:01:24.022 00:01:24.022 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:24.022 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:26.563 Bringing machine 'default' up with 'libvirt' provider... 00:01:26.824 ==> default: Creating image (snapshot of base box volume). 00:01:26.824 ==> default: Creating domain with the following settings... 
00:01:26.824 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728791539_54836fea244e5092e9ac 00:01:26.824 ==> default: -- Domain type: kvm 00:01:26.824 ==> default: -- Cpus: 10 00:01:26.824 ==> default: -- Feature: acpi 00:01:26.824 ==> default: -- Feature: apic 00:01:26.824 ==> default: -- Feature: pae 00:01:26.824 ==> default: -- Memory: 12288M 00:01:26.824 ==> default: -- Memory Backing: hugepages: 00:01:26.824 ==> default: -- Management MAC: 00:01:26.824 ==> default: -- Loader: 00:01:26.824 ==> default: -- Nvram: 00:01:26.824 ==> default: -- Base box: spdk/fedora39 00:01:26.824 ==> default: -- Storage pool: default 00:01:26.824 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728791539_54836fea244e5092e9ac.img (20G) 00:01:26.824 ==> default: -- Volume Cache: default 00:01:26.824 ==> default: -- Kernel: 00:01:26.824 ==> default: -- Initrd: 00:01:26.824 ==> default: -- Graphics Type: vnc 00:01:26.824 ==> default: -- Graphics Port: -1 00:01:26.824 ==> default: -- Graphics IP: 127.0.0.1 00:01:26.824 ==> default: -- Graphics Password: Not defined 00:01:26.824 ==> default: -- Video Type: cirrus 00:01:26.824 ==> default: -- Video VRAM: 9216 00:01:26.824 ==> default: -- Sound Type: 00:01:26.824 ==> default: -- Keymap: en-us 00:01:26.824 ==> default: -- TPM Path: 00:01:26.824 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:26.824 ==> default: -- Command line args: 00:01:26.824 ==> default: -> value=-device, 00:01:26.824 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:26.824 ==> default: -> value=-drive, 00:01:26.824 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:26.824 ==> default: -> value=-device, 00:01:26.824 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:26.824 ==> default: -> value=-device, 00:01:26.824 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:26.824 ==> default: -> value=-drive, 00:01:26.824 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:01:26.824 ==> default: -> value=-device, 00:01:26.824 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:26.824 ==> default: -> value=-device, 00:01:26.824 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:26.824 ==> default: -> value=-drive, 00:01:26.824 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:26.824 ==> default: -> value=-device, 00:01:26.824 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:26.824 ==> default: -> value=-drive, 00:01:26.824 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:26.824 ==> default: -> value=-device, 00:01:26.824 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:26.824 ==> default: -> value=-drive, 00:01:26.824 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:26.824 ==> default: -> value=-device, 00:01:26.824 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:26.824 ==> default: -> value=-device, 00:01:26.824 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:26.824 ==> default: -> value=-device, 00:01:26.824 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:26.824 ==> default: -> value=-drive, 00:01:26.824 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:26.824 ==> default: -> value=-device, 00:01:26.825 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:26.825 ==> default: Creating shared folders metadata... 00:01:27.095 ==> default: Starting domain. 00:01:29.006 ==> default: Waiting for domain to get an IP address... 00:01:47.129 ==> default: Waiting for SSH to become available... 00:01:47.129 ==> default: Configuring and enabling network interfaces... 00:01:49.031 default: SSH address: 192.168.121.128:22 00:01:49.031 default: SSH username: vagrant 00:01:49.031 default: SSH auth method: private key 00:01:50.934 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:59.060 ==> default: Mounting SSHFS shared folder... 00:01:59.998 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:59.998 ==> default: Checking Mount.. 00:02:00.941 ==> default: Folder Successfully Mounted! 00:02:00.941 00:02:00.941 SUCCESS! 00:02:00.941 00:02:00.941 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:00.941 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:00.941 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:00.941 00:02:00.950 [Pipeline] } 00:02:00.965 [Pipeline] // stage 00:02:00.973 [Pipeline] dir 00:02:00.974 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:00.975 [Pipeline] { 00:02:00.987 [Pipeline] catchError 00:02:00.989 [Pipeline] { 00:02:01.001 [Pipeline] sh 00:02:01.284 + vagrant ssh-config --host vagrant 00:02:01.284 + sed -ne '/^Host/,$p' 00:02:01.284 + tee ssh_conf 00:02:04.594 Host vagrant 00:02:04.594 HostName 192.168.121.128 00:02:04.594 User vagrant 00:02:04.594 Port 22 00:02:04.594 UserKnownHostsFile /dev/null 00:02:04.594 StrictHostKeyChecking no 00:02:04.594 PasswordAuthentication no 00:02:04.594 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:04.594 IdentitiesOnly yes 00:02:04.594 LogLevel FATAL 00:02:04.594 ForwardAgent yes 00:02:04.594 ForwardX11 yes 00:02:04.594 00:02:04.607 [Pipeline] withEnv 00:02:04.609 [Pipeline] { 00:02:04.621 [Pipeline] sh 00:02:04.905 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:04.905 source /etc/os-release 00:02:04.905 [[ -e /image.version ]] && img=$(< /image.version) 00:02:04.905 # Minimal, systemd-like check. 
00:02:04.905 if [[ -e /.dockerenv ]]; then 00:02:04.905 # Clear garbage from the node'\''s name: 00:02:04.905 # agt-er_autotest_547-896 -> autotest_547-896 00:02:04.905 # $HOSTNAME is the actual container id 00:02:04.905 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:04.905 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:04.905 # We can assume this is a mount from a host where container is running, 00:02:04.905 # so fetch its hostname to easily identify the target swarm worker. 00:02:04.905 container="$(< /etc/hostname) ($agent)" 00:02:04.905 else 00:02:04.905 # Fallback 00:02:04.905 container=$agent 00:02:04.905 fi 00:02:04.905 fi 00:02:04.905 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:04.905 ' 00:02:05.180 [Pipeline] } 00:02:05.197 [Pipeline] // withEnv 00:02:05.205 [Pipeline] setCustomBuildProperty 00:02:05.218 [Pipeline] stage 00:02:05.220 [Pipeline] { (Tests) 00:02:05.238 [Pipeline] sh 00:02:05.521 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:05.797 [Pipeline] sh 00:02:06.122 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:06.138 [Pipeline] timeout 00:02:06.138 Timeout set to expire in 50 min 00:02:06.140 [Pipeline] { 00:02:06.155 [Pipeline] sh 00:02:06.438 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:07.009 HEAD is now at bbce7a874 event: move struct spdk_lw_thread to internal header 00:02:07.023 [Pipeline] sh 00:02:07.307 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:07.581 [Pipeline] sh 00:02:07.864 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:08.141 [Pipeline] sh 00:02:08.425 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:08.687 ++ readlink -f spdk_repo 00:02:08.687 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:08.687 + [[ -n /home/vagrant/spdk_repo ]] 00:02:08.687 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:08.687 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:08.687 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:08.687 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:08.687 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:08.687 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:08.687 + cd /home/vagrant/spdk_repo 00:02:08.687 + source /etc/os-release 00:02:08.687 ++ NAME='Fedora Linux' 00:02:08.687 ++ VERSION='39 (Cloud Edition)' 00:02:08.687 ++ ID=fedora 00:02:08.687 ++ VERSION_ID=39 00:02:08.687 ++ VERSION_CODENAME= 00:02:08.687 ++ PLATFORM_ID=platform:f39 00:02:08.687 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:08.687 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:08.687 ++ LOGO=fedora-logo-icon 00:02:08.687 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:08.687 ++ HOME_URL=https://fedoraproject.org/ 00:02:08.687 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:08.687 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:08.687 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:08.687 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:08.687 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:08.687 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:08.687 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:08.687 ++ SUPPORT_END=2024-11-12 00:02:08.687 ++ VARIANT='Cloud Edition' 00:02:08.687 ++ VARIANT_ID=cloud 00:02:08.687 + uname -a 00:02:08.687 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:08.687 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:08.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:09.209 Hugepages 00:02:09.209 node hugesize free / total 00:02:09.209 node0 1048576kB 0 / 0 00:02:09.209 node0 2048kB 0 / 0 00:02:09.209 00:02:09.209 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.209 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:09.472 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:09.472 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:09.472 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:09.472 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:09.472 + rm -f /tmp/spdk-ld-path 00:02:09.472 + source autorun-spdk.conf 00:02:09.472 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.472 ++ SPDK_TEST_NVME=1 00:02:09.472 ++ SPDK_TEST_FTL=1 00:02:09.472 ++ SPDK_TEST_ISAL=1 00:02:09.472 ++ SPDK_RUN_ASAN=1 00:02:09.472 ++ SPDK_RUN_UBSAN=1 00:02:09.472 ++ SPDK_TEST_XNVME=1 00:02:09.472 ++ SPDK_TEST_NVME_FDP=1 00:02:09.472 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.472 ++ RUN_NIGHTLY=1 00:02:09.472 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:09.472 + [[ -n '' ]] 00:02:09.472 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:09.472 + for M in /var/spdk/build-*-manifest.txt 00:02:09.472 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:09.472 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.472 + for M in /var/spdk/build-*-manifest.txt 00:02:09.472 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:09.472 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.472 + for M in /var/spdk/build-*-manifest.txt 00:02:09.472 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:09.472 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.472 ++ uname 00:02:09.472 + [[ Linux == \L\i\n\u\x ]] 00:02:09.472 + sudo dmesg -T 00:02:09.472 + sudo dmesg --clear 00:02:09.472 + dmesg_pid=5027 00:02:09.472 
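autorun-spdk.conf, cat'ed and sourced earlier in the log, is a plain shell fragment: every SPDK_*/RUN_NIGHTLY line is an ordinary variable assignment, which is why the same file can be dumped for the log, sourced by prepare_nvme.sh on the host, and sourced again here inside the VM. A minimal sketch of that consumption pattern (the FDP guard is illustrative, not a line from the real scripts):

# hedged sketch: how a test script can consume the sourced configuration
source /home/vagrant/spdk_repo/autorun-spdk.conf
if (( SPDK_TEST_NVME_FDP == 1 )); then
    echo "NVMe FDP tests enabled for this run"
fi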
+ [[ Fedora Linux == FreeBSD ]] 00:02:09.472 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.472 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.472 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:09.472 + [[ -x /usr/src/fio-static/fio ]] 00:02:09.472 + sudo dmesg -Tw 00:02:09.472 + export FIO_BIN=/usr/src/fio-static/fio 00:02:09.472 + FIO_BIN=/usr/src/fio-static/fio 00:02:09.472 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:09.472 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:09.472 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:09.472 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.472 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.472 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:09.472 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.472 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.472 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:09.472 Test configuration: 00:02:09.472 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.472 SPDK_TEST_NVME=1 00:02:09.472 SPDK_TEST_FTL=1 00:02:09.472 SPDK_TEST_ISAL=1 00:02:09.472 SPDK_RUN_ASAN=1 00:02:09.472 SPDK_RUN_UBSAN=1 00:02:09.472 SPDK_TEST_XNVME=1 00:02:09.472 SPDK_TEST_NVME_FDP=1 00:02:09.472 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.734 RUN_NIGHTLY=1 03:53:02 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:09.734 03:53:02 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:09.734 03:53:02 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:09.734 03:53:02 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:09.734 03:53:02 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.734 03:53:02 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.734 03:53:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.734 03:53:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.734 03:53:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.734 03:53:02 -- paths/export.sh@5 -- $ export PATH 00:02:09.734 03:53:02 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.734 03:53:02 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:09.734 03:53:02 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:09.734 03:53:02 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728791582.XXXXXX 00:02:09.735 03:53:02 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728791582.YzTFjn 00:02:09.735 03:53:02 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:09.735 03:53:02 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:09.735 03:53:02 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:09.735 03:53:02 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:09.735 03:53:02 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:09.735 03:53:02 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:09.735 03:53:02 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:09.735 03:53:02 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.735 03:53:02 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:09.735 03:53:02 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:09.735 03:53:02 -- pm/common@17 -- $ local monitor 00:02:09.735 03:53:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.735 03:53:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.735 03:53:02 -- pm/common@25 -- $ sleep 1 00:02:09.735 03:53:02 -- pm/common@21 -- $ date +%s 00:02:09.735 03:53:02 -- pm/common@21 -- $ date +%s 00:02:09.735 03:53:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728791582 00:02:09.735 03:53:02 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728791582 00:02:09.735 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728791582_collect-cpu-load.pm.log 00:02:09.735 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728791582_collect-vmstat.pm.log 00:02:10.679 03:53:03 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:10.679 03:53:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:10.679 03:53:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:10.679 03:53:03 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:10.679 03:53:03 -- spdk/autobuild.sh@16 -- $ date -u 00:02:10.679 Sun Oct 13 03:53:03 AM UTC 2024 00:02:10.679 03:53:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:10.679 v25.01-pre-55-gbbce7a874 
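The config_params string assembled above is exactly the flag set that autobuild.sh hands to ./configure a few entries further on (where --with-shared is appended). Replayed by hand against the same checkout it would look like this (a sketch; paths per the log):

# hedged sketch: re-running the configure step with the logged flag set
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme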
00:02:10.679 03:53:03 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:10.679 03:53:03 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:10.679 03:53:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:10.679 03:53:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:10.679 03:53:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.679 ************************************ 00:02:10.679 START TEST asan 00:02:10.679 ************************************ 00:02:10.679 using asan 00:02:10.679 03:53:03 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:10.679 00:02:10.679 real 0m0.000s 00:02:10.679 user 0m0.000s 00:02:10.679 sys 0m0.000s 00:02:10.679 03:53:03 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:10.679 03:53:03 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:10.679 ************************************ 00:02:10.679 END TEST asan 00:02:10.679 ************************************ 00:02:10.679 03:53:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:10.679 03:53:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:10.679 03:53:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:10.679 03:53:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:10.679 03:53:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.679 ************************************ 00:02:10.679 START TEST ubsan 00:02:10.679 ************************************ 00:02:10.679 using ubsan 00:02:10.679 03:53:03 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:10.679 00:02:10.679 real 0m0.000s 00:02:10.679 user 0m0.000s 00:02:10.679 sys 0m0.000s 00:02:10.679 ************************************ 00:02:10.679 END TEST ubsan 00:02:10.679 ************************************ 00:02:10.679 03:53:03 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:10.679 03:53:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:10.939 03:53:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:10.939 03:53:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:10.939 03:53:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:10.939 03:53:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:10.939 03:53:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:10.939 03:53:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:10.939 03:53:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:10.939 03:53:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:10.939 03:53:03 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:10.939 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:10.939 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:11.510 Using 'verbs' RDMA provider 00:02:24.684 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:34.671 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:34.671 Creating mk/config.mk...done. 00:02:34.671 Creating mk/cc.flags.mk...done. 00:02:34.671 Type 'make' to build. 
00:02:34.671 03:53:26 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:34.671 03:53:26 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:34.671 03:53:26 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:34.671 03:53:26 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.671 ************************************ 00:02:34.671 START TEST make 00:02:34.671 ************************************ 00:02:34.671 03:53:26 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:34.671 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:34.671 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:34.671 meson setup builddir \ 00:02:34.671 -Dwith-libaio=enabled \ 00:02:34.671 -Dwith-liburing=enabled \ 00:02:34.671 -Dwith-libvfn=disabled \ 00:02:34.671 -Dwith-spdk=false && \ 00:02:34.671 meson compile -C builddir && \ 00:02:34.671 cd -) 00:02:34.671 make[1]: Nothing to be done for 'all'. 00:02:36.042 The Meson build system 00:02:36.042 Version: 1.5.0 00:02:36.042 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:36.042 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:36.042 Build type: native build 00:02:36.042 Project name: xnvme 00:02:36.042 Project version: 0.7.3 00:02:36.042 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:36.042 C linker for the host machine: cc ld.bfd 2.40-14 00:02:36.042 Host machine cpu family: x86_64 00:02:36.042 Host machine cpu: x86_64 00:02:36.042 Message: host_machine.system: linux 00:02:36.042 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:36.042 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:36.042 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:36.042 Run-time dependency threads found: YES 00:02:36.042 Has header "setupapi.h" : NO 00:02:36.042 Has header "linux/blkzoned.h" : YES 00:02:36.042 Has header "linux/blkzoned.h" : YES (cached) 00:02:36.042 Has header "libaio.h" : YES 00:02:36.042 Library aio found: YES 00:02:36.042 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:36.042 Run-time dependency liburing found: YES 2.2 00:02:36.042 Dependency libvfn skipped: feature with-libvfn disabled 00:02:36.042 Run-time dependency appleframeworks found: NO (tried framework) 00:02:36.042 Run-time dependency appleframeworks found: NO (tried framework) 00:02:36.042 Configuring xnvme_config.h using configuration 00:02:36.042 Configuring xnvme.spec using configuration 00:02:36.042 Run-time dependency bash-completion found: YES 2.11 00:02:36.042 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:36.042 Program cp found: YES (/usr/bin/cp) 00:02:36.042 Has header "winsock2.h" : NO 00:02:36.042 Has header "dbghelp.h" : NO 00:02:36.042 Library rpcrt4 found: NO 00:02:36.042 Library rt found: YES 00:02:36.042 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:36.042 Found CMake: /usr/bin/cmake (3.27.7) 00:02:36.042 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:36.042 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:36.042 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:36.042 Build targets in project: 32 00:02:36.042 00:02:36.042 xnvme 0.7.3 00:02:36.042 00:02:36.042 User defined options 00:02:36.042 with-libaio : enabled 00:02:36.042 with-liburing: enabled 00:02:36.042 with-libvfn : disabled 00:02:36.042 with-spdk : false 00:02:36.042 00:02:36.042 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.607 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:36.607 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:36.607 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:36.607 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:36.607 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:36.607 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:36.607 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:36.607 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:36.607 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:36.607 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:36.865 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:36.865 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:36.865 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:36.865 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:36.865 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:36.865 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:36.865 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:36.865 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:36.865 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:36.865 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:36.865 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:36.865 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:36.865 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:36.865 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:36.865 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:36.865 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:36.865 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:36.865 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:36.865 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:36.865 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:36.865 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:36.865 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:36.865 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:36.865 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:36.865 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:36.865 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:36.865 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:36.865 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:37.123 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:37.123 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:37.123 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:37.123 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:37.123 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:37.123 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:37.123 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:37.123 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:37.123 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:37.123 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:37.123 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:37.123 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:37.123 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:37.123 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:37.123 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:37.123 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:37.123 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:37.123 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:37.123 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:37.123 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:37.123 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:37.123 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:37.123 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:37.123 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:37.123 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:37.123 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:37.123 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:37.123 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:37.123 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:37.123 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:37.380 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:37.380 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:37.380 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:37.380 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:37.380 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:37.380 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:37.380 [74/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:37.380 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:37.380 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:37.380 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:37.380 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:37.380 [79/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:37.380 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:37.380 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:37.380 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:37.380 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:37.380 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:37.380 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:37.380 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:37.637 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:37.637 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:37.637 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:37.637 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:37.637 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:37.637 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:37.637 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:37.638 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:37.638 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:37.638 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:37.638 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:37.638 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:37.638 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:37.638 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:37.638 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:37.638 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:37.638 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:37.638 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:37.638 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:37.638 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:37.638 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:37.638 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:37.638 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:37.638 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:37.638 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:37.638 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:37.638 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:37.638 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:37.638 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:37.638 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:37.638 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:37.638 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:37.638 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:37.638 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:37.638 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:37.638 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:37.895 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:37.895 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:37.895 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:37.895 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:37.895 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:37.895 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:37.895 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 
00:02:37.895 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:37.895 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:37.895 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:37.895 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:37.895 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:37.895 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:37.895 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:37.895 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:37.895 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:37.895 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:37.895 [140/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:37.895 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:37.895 [142/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:38.153 [143/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:38.153 [144/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:38.153 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:38.153 [146/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:38.153 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:38.153 [148/203] Linking target lib/libxnvme.so 00:02:38.153 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:38.153 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:38.153 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:38.153 [152/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:38.153 [153/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:38.153 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:38.153 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:38.153 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:38.153 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:38.153 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:38.153 [159/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:38.153 [160/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:38.411 [161/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:38.411 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:38.411 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:38.411 [164/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:38.411 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:38.411 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:38.411 [167/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:38.411 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:38.411 [169/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:38.411 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:38.411 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:38.411 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:38.669 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:38.669 [174/203] Linking static target lib/libxnvme.a 00:02:38.669 [175/203] Linking target tests/xnvme_tests_async_intf 00:02:38.669 [176/203] 
Linking target tests/xnvme_tests_buf 00:02:38.669 [177/203] Linking target tests/xnvme_tests_enum 00:02:38.669 [178/203] Linking target tests/xnvme_tests_cli 00:02:38.669 [179/203] Linking target tests/xnvme_tests_scc 00:02:38.669 [180/203] Linking target tests/xnvme_tests_ioworker 00:02:38.669 [181/203] Linking target tests/xnvme_tests_znd_append 00:02:38.669 [182/203] Linking target tests/xnvme_tests_lblk 00:02:38.669 [183/203] Linking target tests/xnvme_tests_xnvme_file 00:02:38.669 [184/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:38.669 [185/203] Linking target tests/xnvme_tests_kvs 00:02:38.669 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:38.669 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:38.669 [188/203] Linking target tests/xnvme_tests_znd_state 00:02:38.669 [189/203] Linking target tests/xnvme_tests_map 00:02:38.669 [190/203] Linking target tools/lblk 00:02:38.669 [191/203] Linking target tools/xnvme 00:02:38.669 [192/203] Linking target tools/xdd 00:02:38.669 [193/203] Linking target tools/zoned 00:02:38.669 [194/203] Linking target tools/xnvme_file 00:02:38.669 [195/203] Linking target examples/xnvme_enum 00:02:38.669 [196/203] Linking target examples/xnvme_io_async 00:02:38.669 [197/203] Linking target examples/xnvme_hello 00:02:38.669 [198/203] Linking target examples/xnvme_dev 00:02:38.669 [199/203] Linking target examples/xnvme_single_async 00:02:38.669 [200/203] Linking target tools/kvs 00:02:38.669 [201/203] Linking target examples/zoned_io_sync 00:02:38.669 [202/203] Linking target examples/zoned_io_async 00:02:38.669 [203/203] Linking target examples/xnvme_single_sync 00:02:38.669 INFO: autodetecting backend as ninja 00:02:38.669 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:38.927 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:44.191 The Meson build system 00:02:44.191 Version: 1.5.0 00:02:44.191 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:44.191 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:44.191 Build type: native build 00:02:44.191 Program cat found: YES (/usr/bin/cat) 00:02:44.191 Project name: DPDK 00:02:44.191 Project version: 24.03.0 00:02:44.191 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:44.191 C linker for the host machine: cc ld.bfd 2.40-14 00:02:44.191 Host machine cpu family: x86_64 00:02:44.191 Host machine cpu: x86_64 00:02:44.191 Message: ## Building in Developer Mode ## 00:02:44.191 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:44.191 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:44.191 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:44.191 Program python3 found: YES (/usr/bin/python3) 00:02:44.191 Program cat found: YES (/usr/bin/cat) 00:02:44.191 Compiler for C supports arguments -march=native: YES 00:02:44.191 Checking for size of "void *" : 8 00:02:44.191 Checking for size of "void *" : 8 (cached) 00:02:44.191 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:44.191 Library m found: YES 00:02:44.191 Library numa found: YES 00:02:44.191 Has header "numaif.h" : YES 00:02:44.191 Library fdt found: NO 00:02:44.191 Library execinfo found: NO 00:02:44.191 Has header "execinfo.h" : YES 00:02:44.191 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:44.191 Run-time dependency libarchive 
found: NO (tried pkgconfig) 00:02:44.191 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:44.191 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:44.191 Run-time dependency openssl found: YES 3.1.1 00:02:44.191 Run-time dependency libpcap found: YES 1.10.4 00:02:44.191 Has header "pcap.h" with dependency libpcap: YES 00:02:44.191 Compiler for C supports arguments -Wcast-qual: YES 00:02:44.191 Compiler for C supports arguments -Wdeprecated: YES 00:02:44.191 Compiler for C supports arguments -Wformat: YES 00:02:44.191 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:44.191 Compiler for C supports arguments -Wformat-security: NO 00:02:44.191 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:44.191 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:44.191 Compiler for C supports arguments -Wnested-externs: YES 00:02:44.191 Compiler for C supports arguments -Wold-style-definition: YES 00:02:44.191 Compiler for C supports arguments -Wpointer-arith: YES 00:02:44.191 Compiler for C supports arguments -Wsign-compare: YES 00:02:44.191 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:44.191 Compiler for C supports arguments -Wundef: YES 00:02:44.191 Compiler for C supports arguments -Wwrite-strings: YES 00:02:44.191 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:44.191 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:44.191 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:44.191 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:44.191 Program objdump found: YES (/usr/bin/objdump) 00:02:44.191 Compiler for C supports arguments -mavx512f: YES 00:02:44.191 Checking if "AVX512 checking" compiles: YES 00:02:44.191 Fetching value of define "__SSE4_2__" : 1 00:02:44.191 Fetching value of define "__AES__" : 1 00:02:44.191 Fetching value of define "__AVX__" : 1 00:02:44.191 Fetching value of define "__AVX2__" : 1 00:02:44.191 Fetching value of define "__AVX512BW__" : 1 00:02:44.191 Fetching value of define "__AVX512CD__" : 1 00:02:44.191 Fetching value of define "__AVX512DQ__" : 1 00:02:44.191 Fetching value of define "__AVX512F__" : 1 00:02:44.191 Fetching value of define "__AVX512VL__" : 1 00:02:44.191 Fetching value of define "__PCLMUL__" : 1 00:02:44.191 Fetching value of define "__RDRND__" : 1 00:02:44.191 Fetching value of define "__RDSEED__" : 1 00:02:44.191 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:44.191 Fetching value of define "__znver1__" : (undefined) 00:02:44.191 Fetching value of define "__znver2__" : (undefined) 00:02:44.191 Fetching value of define "__znver3__" : (undefined) 00:02:44.191 Fetching value of define "__znver4__" : (undefined) 00:02:44.191 Library asan found: YES 00:02:44.191 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:44.191 Message: lib/log: Defining dependency "log" 00:02:44.191 Message: lib/kvargs: Defining dependency "kvargs" 00:02:44.191 Message: lib/telemetry: Defining dependency "telemetry" 00:02:44.191 Library rt found: YES 00:02:44.191 Checking for function "getentropy" : NO 00:02:44.191 Message: lib/eal: Defining dependency "eal" 00:02:44.191 Message: lib/ring: Defining dependency "ring" 00:02:44.191 Message: lib/rcu: Defining dependency "rcu" 00:02:44.191 Message: lib/mempool: Defining dependency "mempool" 00:02:44.191 Message: lib/mbuf: Defining dependency "mbuf" 00:02:44.191 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:44.191 Fetching value 
of define "__AVX512F__" : 1 (cached) 00:02:44.191 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:44.191 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:44.191 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:44.191 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:44.191 Compiler for C supports arguments -mpclmul: YES 00:02:44.191 Compiler for C supports arguments -maes: YES 00:02:44.191 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:44.191 Compiler for C supports arguments -mavx512bw: YES 00:02:44.191 Compiler for C supports arguments -mavx512dq: YES 00:02:44.191 Compiler for C supports arguments -mavx512vl: YES 00:02:44.191 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:44.191 Compiler for C supports arguments -mavx2: YES 00:02:44.191 Compiler for C supports arguments -mavx: YES 00:02:44.191 Message: lib/net: Defining dependency "net" 00:02:44.191 Message: lib/meter: Defining dependency "meter" 00:02:44.191 Message: lib/ethdev: Defining dependency "ethdev" 00:02:44.191 Message: lib/pci: Defining dependency "pci" 00:02:44.191 Message: lib/cmdline: Defining dependency "cmdline" 00:02:44.191 Message: lib/hash: Defining dependency "hash" 00:02:44.191 Message: lib/timer: Defining dependency "timer" 00:02:44.191 Message: lib/compressdev: Defining dependency "compressdev" 00:02:44.191 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:44.191 Message: lib/dmadev: Defining dependency "dmadev" 00:02:44.191 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:44.191 Message: lib/power: Defining dependency "power" 00:02:44.191 Message: lib/reorder: Defining dependency "reorder" 00:02:44.191 Message: lib/security: Defining dependency "security" 00:02:44.191 Has header "linux/userfaultfd.h" : YES 00:02:44.191 Has header "linux/vduse.h" : YES 00:02:44.191 Message: lib/vhost: Defining dependency "vhost" 00:02:44.191 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:44.191 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:44.191 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:44.191 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:44.191 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:44.191 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:44.191 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:44.191 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:44.191 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:44.191 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:44.191 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:44.191 Configuring doxy-api-html.conf using configuration 00:02:44.191 Configuring doxy-api-man.conf using configuration 00:02:44.191 Program mandb found: YES (/usr/bin/mandb) 00:02:44.191 Program sphinx-build found: NO 00:02:44.191 Configuring rte_build_config.h using configuration 00:02:44.191 Message: 00:02:44.191 ================= 00:02:44.191 Applications Enabled 00:02:44.191 ================= 00:02:44.191 00:02:44.191 apps: 00:02:44.191 00:02:44.191 00:02:44.191 Message: 00:02:44.191 ================= 00:02:44.191 Libraries Enabled 00:02:44.191 ================= 00:02:44.191 00:02:44.191 libs: 00:02:44.191 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:44.191 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:02:44.191 cryptodev, dmadev, power, reorder, security, vhost, 00:02:44.191 00:02:44.191 Message: 00:02:44.191 =============== 00:02:44.191 Drivers Enabled 00:02:44.191 =============== 00:02:44.191 00:02:44.191 common: 00:02:44.191 00:02:44.191 bus: 00:02:44.191 pci, vdev, 00:02:44.191 mempool: 00:02:44.191 ring, 00:02:44.191 dma: 00:02:44.191 00:02:44.191 net: 00:02:44.191 00:02:44.191 crypto: 00:02:44.191 00:02:44.191 compress: 00:02:44.191 00:02:44.191 vdpa: 00:02:44.191 00:02:44.191 00:02:44.191 Message: 00:02:44.191 ================= 00:02:44.191 Content Skipped 00:02:44.191 ================= 00:02:44.191 00:02:44.191 apps: 00:02:44.191 dumpcap: explicitly disabled via build config 00:02:44.191 graph: explicitly disabled via build config 00:02:44.191 pdump: explicitly disabled via build config 00:02:44.191 proc-info: explicitly disabled via build config 00:02:44.191 test-acl: explicitly disabled via build config 00:02:44.191 test-bbdev: explicitly disabled via build config 00:02:44.191 test-cmdline: explicitly disabled via build config 00:02:44.191 test-compress-perf: explicitly disabled via build config 00:02:44.192 test-crypto-perf: explicitly disabled via build config 00:02:44.192 test-dma-perf: explicitly disabled via build config 00:02:44.192 test-eventdev: explicitly disabled via build config 00:02:44.192 test-fib: explicitly disabled via build config 00:02:44.192 test-flow-perf: explicitly disabled via build config 00:02:44.192 test-gpudev: explicitly disabled via build config 00:02:44.192 test-mldev: explicitly disabled via build config 00:02:44.192 test-pipeline: explicitly disabled via build config 00:02:44.192 test-pmd: explicitly disabled via build config 00:02:44.192 test-regex: explicitly disabled via build config 00:02:44.192 test-sad: explicitly disabled via build config 00:02:44.192 test-security-perf: explicitly disabled via build config 00:02:44.192 00:02:44.192 libs: 00:02:44.192 argparse: explicitly disabled via build config 00:02:44.192 metrics: explicitly disabled via build config 00:02:44.192 acl: explicitly disabled via build config 00:02:44.192 bbdev: explicitly disabled via build config 00:02:44.192 bitratestats: explicitly disabled via build config 00:02:44.192 bpf: explicitly disabled via build config 00:02:44.192 cfgfile: explicitly disabled via build config 00:02:44.192 distributor: explicitly disabled via build config 00:02:44.192 efd: explicitly disabled via build config 00:02:44.192 eventdev: explicitly disabled via build config 00:02:44.192 dispatcher: explicitly disabled via build config 00:02:44.192 gpudev: explicitly disabled via build config 00:02:44.192 gro: explicitly disabled via build config 00:02:44.192 gso: explicitly disabled via build config 00:02:44.192 ip_frag: explicitly disabled via build config 00:02:44.192 jobstats: explicitly disabled via build config 00:02:44.192 latencystats: explicitly disabled via build config 00:02:44.192 lpm: explicitly disabled via build config 00:02:44.192 member: explicitly disabled via build config 00:02:44.192 pcapng: explicitly disabled via build config 00:02:44.192 rawdev: explicitly disabled via build config 00:02:44.192 regexdev: explicitly disabled via build config 00:02:44.192 mldev: explicitly disabled via build config 00:02:44.192 rib: explicitly disabled via build config 00:02:44.192 sched: explicitly disabled via build config 00:02:44.192 stack: explicitly disabled via build config 00:02:44.192 ipsec: explicitly disabled via build config 00:02:44.192 pdcp: explicitly disabled via build config 
00:02:44.192 fib: explicitly disabled via build config 00:02:44.192 port: explicitly disabled via build config 00:02:44.192 pdump: explicitly disabled via build config 00:02:44.192 table: explicitly disabled via build config 00:02:44.192 pipeline: explicitly disabled via build config 00:02:44.192 graph: explicitly disabled via build config 00:02:44.192 node: explicitly disabled via build config 00:02:44.192 00:02:44.192 drivers: 00:02:44.192 common/cpt: not in enabled drivers build config 00:02:44.192 common/dpaax: not in enabled drivers build config 00:02:44.192 common/iavf: not in enabled drivers build config 00:02:44.192 common/idpf: not in enabled drivers build config 00:02:44.192 common/ionic: not in enabled drivers build config 00:02:44.192 common/mvep: not in enabled drivers build config 00:02:44.192 common/octeontx: not in enabled drivers build config 00:02:44.192 bus/auxiliary: not in enabled drivers build config 00:02:44.192 bus/cdx: not in enabled drivers build config 00:02:44.192 bus/dpaa: not in enabled drivers build config 00:02:44.192 bus/fslmc: not in enabled drivers build config 00:02:44.192 bus/ifpga: not in enabled drivers build config 00:02:44.192 bus/platform: not in enabled drivers build config 00:02:44.192 bus/uacce: not in enabled drivers build config 00:02:44.192 bus/vmbus: not in enabled drivers build config 00:02:44.192 common/cnxk: not in enabled drivers build config 00:02:44.192 common/mlx5: not in enabled drivers build config 00:02:44.192 common/nfp: not in enabled drivers build config 00:02:44.192 common/nitrox: not in enabled drivers build config 00:02:44.192 common/qat: not in enabled drivers build config 00:02:44.192 common/sfc_efx: not in enabled drivers build config 00:02:44.192 mempool/bucket: not in enabled drivers build config 00:02:44.192 mempool/cnxk: not in enabled drivers build config 00:02:44.192 mempool/dpaa: not in enabled drivers build config 00:02:44.192 mempool/dpaa2: not in enabled drivers build config 00:02:44.192 mempool/octeontx: not in enabled drivers build config 00:02:44.192 mempool/stack: not in enabled drivers build config 00:02:44.192 dma/cnxk: not in enabled drivers build config 00:02:44.192 dma/dpaa: not in enabled drivers build config 00:02:44.192 dma/dpaa2: not in enabled drivers build config 00:02:44.192 dma/hisilicon: not in enabled drivers build config 00:02:44.192 dma/idxd: not in enabled drivers build config 00:02:44.192 dma/ioat: not in enabled drivers build config 00:02:44.192 dma/skeleton: not in enabled drivers build config 00:02:44.192 net/af_packet: not in enabled drivers build config 00:02:44.192 net/af_xdp: not in enabled drivers build config 00:02:44.192 net/ark: not in enabled drivers build config 00:02:44.192 net/atlantic: not in enabled drivers build config 00:02:44.192 net/avp: not in enabled drivers build config 00:02:44.192 net/axgbe: not in enabled drivers build config 00:02:44.192 net/bnx2x: not in enabled drivers build config 00:02:44.192 net/bnxt: not in enabled drivers build config 00:02:44.192 net/bonding: not in enabled drivers build config 00:02:44.192 net/cnxk: not in enabled drivers build config 00:02:44.192 net/cpfl: not in enabled drivers build config 00:02:44.192 net/cxgbe: not in enabled drivers build config 00:02:44.192 net/dpaa: not in enabled drivers build config 00:02:44.192 net/dpaa2: not in enabled drivers build config 00:02:44.192 net/e1000: not in enabled drivers build config 00:02:44.192 net/ena: not in enabled drivers build config 00:02:44.192 net/enetc: not in enabled drivers build 
config 00:02:44.192 net/enetfec: not in enabled drivers build config 00:02:44.192 net/enic: not in enabled drivers build config 00:02:44.192 net/failsafe: not in enabled drivers build config 00:02:44.192 net/fm10k: not in enabled drivers build config 00:02:44.192 net/gve: not in enabled drivers build config 00:02:44.192 net/hinic: not in enabled drivers build config 00:02:44.192 net/hns3: not in enabled drivers build config 00:02:44.192 net/i40e: not in enabled drivers build config 00:02:44.192 net/iavf: not in enabled drivers build config 00:02:44.192 net/ice: not in enabled drivers build config 00:02:44.192 net/idpf: not in enabled drivers build config 00:02:44.192 net/igc: not in enabled drivers build config 00:02:44.192 net/ionic: not in enabled drivers build config 00:02:44.192 net/ipn3ke: not in enabled drivers build config 00:02:44.192 net/ixgbe: not in enabled drivers build config 00:02:44.192 net/mana: not in enabled drivers build config 00:02:44.192 net/memif: not in enabled drivers build config 00:02:44.192 net/mlx4: not in enabled drivers build config 00:02:44.192 net/mlx5: not in enabled drivers build config 00:02:44.192 net/mvneta: not in enabled drivers build config 00:02:44.192 net/mvpp2: not in enabled drivers build config 00:02:44.192 net/netvsc: not in enabled drivers build config 00:02:44.192 net/nfb: not in enabled drivers build config 00:02:44.192 net/nfp: not in enabled drivers build config 00:02:44.192 net/ngbe: not in enabled drivers build config 00:02:44.192 net/null: not in enabled drivers build config 00:02:44.192 net/octeontx: not in enabled drivers build config 00:02:44.192 net/octeon_ep: not in enabled drivers build config 00:02:44.192 net/pcap: not in enabled drivers build config 00:02:44.192 net/pfe: not in enabled drivers build config 00:02:44.192 net/qede: not in enabled drivers build config 00:02:44.192 net/ring: not in enabled drivers build config 00:02:44.192 net/sfc: not in enabled drivers build config 00:02:44.192 net/softnic: not in enabled drivers build config 00:02:44.192 net/tap: not in enabled drivers build config 00:02:44.192 net/thunderx: not in enabled drivers build config 00:02:44.192 net/txgbe: not in enabled drivers build config 00:02:44.192 net/vdev_netvsc: not in enabled drivers build config 00:02:44.192 net/vhost: not in enabled drivers build config 00:02:44.192 net/virtio: not in enabled drivers build config 00:02:44.192 net/vmxnet3: not in enabled drivers build config 00:02:44.192 raw/*: missing internal dependency, "rawdev" 00:02:44.192 crypto/armv8: not in enabled drivers build config 00:02:44.192 crypto/bcmfs: not in enabled drivers build config 00:02:44.192 crypto/caam_jr: not in enabled drivers build config 00:02:44.192 crypto/ccp: not in enabled drivers build config 00:02:44.192 crypto/cnxk: not in enabled drivers build config 00:02:44.192 crypto/dpaa_sec: not in enabled drivers build config 00:02:44.192 crypto/dpaa2_sec: not in enabled drivers build config 00:02:44.192 crypto/ipsec_mb: not in enabled drivers build config 00:02:44.192 crypto/mlx5: not in enabled drivers build config 00:02:44.192 crypto/mvsam: not in enabled drivers build config 00:02:44.192 crypto/nitrox: not in enabled drivers build config 00:02:44.192 crypto/null: not in enabled drivers build config 00:02:44.192 crypto/octeontx: not in enabled drivers build config 00:02:44.192 crypto/openssl: not in enabled drivers build config 00:02:44.192 crypto/scheduler: not in enabled drivers build config 00:02:44.192 crypto/uadk: not in enabled drivers build config 
00:02:44.192 crypto/virtio: not in enabled drivers build config 00:02:44.192 compress/isal: not in enabled drivers build config 00:02:44.192 compress/mlx5: not in enabled drivers build config 00:02:44.192 compress/nitrox: not in enabled drivers build config 00:02:44.192 compress/octeontx: not in enabled drivers build config 00:02:44.192 compress/zlib: not in enabled drivers build config 00:02:44.192 regex/*: missing internal dependency, "regexdev" 00:02:44.192 ml/*: missing internal dependency, "mldev" 00:02:44.192 vdpa/ifc: not in enabled drivers build config 00:02:44.192 vdpa/mlx5: not in enabled drivers build config 00:02:44.192 vdpa/nfp: not in enabled drivers build config 00:02:44.192 vdpa/sfc: not in enabled drivers build config 00:02:44.192 event/*: missing internal dependency, "eventdev" 00:02:44.192 baseband/*: missing internal dependency, "bbdev" 00:02:44.192 gpu/*: missing internal dependency, "gpudev" 00:02:44.192 00:02:44.192 00:02:44.192 Build targets in project: 84 00:02:44.192 00:02:44.192 DPDK 24.03.0 00:02:44.192 00:02:44.192 User defined options 00:02:44.192 buildtype : debug 00:02:44.192 default_library : shared 00:02:44.192 libdir : lib 00:02:44.192 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:44.192 b_sanitize : address 00:02:44.192 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:44.192 c_link_args : 00:02:44.192 cpu_instruction_set: native 00:02:44.192 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:44.192 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:44.192 enable_docs : false 00:02:44.193 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:44.193 enable_kmods : false 00:02:44.193 max_lcores : 128 00:02:44.193 tests : false 00:02:44.193 00:02:44.193 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:44.193 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:44.450 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:44.450 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:44.450 [3/267] Linking static target lib/librte_kvargs.a 00:02:44.450 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:44.450 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:44.450 [6/267] Linking static target lib/librte_log.a 00:02:44.708 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:44.708 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.708 [9/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:44.708 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:44.708 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:44.708 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.708 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:44.965 [14/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:44.965 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:44.965 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:44.965 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:44.965 [18/267] Linking static target lib/librte_telemetry.a 00:02:45.221 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:45.221 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:45.221 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:45.221 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:45.221 [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.221 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:45.221 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:45.221 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:45.221 [27/267] Linking target lib/librte_log.so.24.1 00:02:45.479 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:45.479 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:45.479 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:45.479 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:45.479 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:45.737 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:45.737 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:45.737 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.737 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:45.737 [37/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:45.737 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:45.737 [39/267] Linking target lib/librte_telemetry.so.24.1 00:02:45.737 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:45.737 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:45.737 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:45.737 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:45.737 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:45.995 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:45.995 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:45.995 [47/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:45.995 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:45.995 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:46.286 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:46.286 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:46.286 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:46.286 [53/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:46.286 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:46.286 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:46.286 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:46.286 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:46.286 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:46.286 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:46.544 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:46.544 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:46.544 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:46.544 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:46.544 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:46.803 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:46.803 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:46.803 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:46.803 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:46.803 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:46.803 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:46.803 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:46.803 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:46.803 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:47.061 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:47.061 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:47.061 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:47.061 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:47.061 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:47.061 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:47.319 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:47.319 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:47.319 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:47.319 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:47.576 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:47.576 [85/267] Linking static target lib/librte_ring.a 00:02:47.576 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:47.576 [87/267] Linking static target lib/librte_eal.a 00:02:47.577 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:47.577 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:47.577 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:47.577 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:47.577 [92/267] Linking static target lib/librte_mempool.a 00:02:47.577 [93/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:47.577 [94/267] Linking static target lib/librte_rcu.a 00:02:47.835 [95/267] 
Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:47.835 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:47.835 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.835 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:48.093 [99/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.093 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:48.093 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:48.093 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:48.351 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:48.351 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:48.351 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:48.351 [106/267] Linking static target lib/librte_meter.a 00:02:48.351 [107/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:48.351 [108/267] Linking static target lib/librte_net.a 00:02:48.609 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:48.609 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:48.609 [111/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.609 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:48.609 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:48.609 [114/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:48.609 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.609 [116/267] Linking static target lib/librte_mbuf.a 00:02:48.609 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.867 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:49.125 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:49.125 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:49.125 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:49.383 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:49.383 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:49.383 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:49.383 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:49.383 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:49.383 [127/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.383 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:49.383 [129/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:49.383 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:49.383 [131/267] Linking static target lib/librte_pci.a 00:02:49.641 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:49.641 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:49.641 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:49.641 [135/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:49.641 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:49.641 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:49.641 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:49.641 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:49.641 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:49.641 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:49.641 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:49.641 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:49.641 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:49.641 [145/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.641 [146/267] Linking static target lib/librte_cmdline.a 00:02:49.899 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:49.899 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:49.899 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:49.899 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:50.158 [151/267] Linking static target lib/librte_timer.a 00:02:50.158 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:50.158 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:50.158 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:50.416 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:50.416 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.416 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:50.416 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:50.416 [159/267] Linking static target lib/librte_compressdev.a 00:02:50.416 [160/267] Linking static target lib/librte_ethdev.a 00:02:50.416 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:50.416 [162/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.416 [163/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:50.416 [164/267] Linking static target lib/librte_hash.a 00:02:50.674 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:50.674 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:50.674 [167/267] Linking static target lib/librte_dmadev.a 00:02:50.674 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:50.674 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:50.933 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:50.933 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:50.933 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:50.933 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.933 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:51.191 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:51.191 [176/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:51.191 [177/267] Linking static target lib/librte_cryptodev.a 00:02:51.191 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:51.191 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:51.191 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:51.191 [181/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.449 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:51.449 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:51.449 [184/267] Linking static target lib/librte_power.a 00:02:51.449 [185/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.707 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:51.707 [187/267] Linking static target lib/librte_reorder.a 00:02:51.707 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:51.707 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:51.707 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:51.707 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:51.707 [192/267] Linking static target lib/librte_security.a 00:02:52.273 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:52.273 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.273 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.531 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.531 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:52.531 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:52.531 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:52.531 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:52.789 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:52.789 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:52.789 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:53.048 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:53.048 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:53.048 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:53.048 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:53.048 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:53.048 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:53.048 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.305 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:53.305 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.305 [213/267] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.305 [214/267] Linking static target drivers/librte_bus_vdev.a 00:02:53.305 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:53.305 [216/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:53.305 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.305 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.305 [219/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:53.305 [220/267] Linking static target drivers/librte_bus_pci.a 00:02:53.564 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:53.564 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.564 [223/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.564 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.564 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:53.822 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.079 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:55.012 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.012 [229/267] Linking target lib/librte_eal.so.24.1 00:02:55.269 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:55.269 [231/267] Linking target lib/librte_pci.so.24.1 00:02:55.269 [232/267] Linking target lib/librte_ring.so.24.1 00:02:55.269 [233/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:55.269 [234/267] Linking target lib/librte_timer.so.24.1 00:02:55.269 [235/267] Linking target lib/librte_meter.so.24.1 00:02:55.269 [236/267] Linking target lib/librte_dmadev.so.24.1 00:02:55.269 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:55.269 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:55.269 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:55.269 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:55.269 [241/267] Linking target lib/librte_rcu.so.24.1 00:02:55.269 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:55.269 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:55.526 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:55.526 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:55.526 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:55.526 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:55.526 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:55.526 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:55.783 [250/267] Linking target lib/librte_compressdev.so.24.1 00:02:55.783 [251/267] Linking target lib/librte_net.so.24.1 00:02:55.783 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:55.783 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:55.783 [254/267] Generating symbol file 
lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:55.783 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:55.783 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:55.784 [257/267] Linking target lib/librte_security.so.24.1 00:02:55.784 [258/267] Linking target lib/librte_hash.so.24.1 00:02:56.041 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:56.041 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.041 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:56.300 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:56.300 [263/267] Linking target lib/librte_power.so.24.1 00:02:56.866 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:56.866 [265/267] Linking static target lib/librte_vhost.a 00:02:58.238 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.239 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:58.239 INFO: autodetecting backend as ninja 00:02:58.239 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:13.120 CC lib/ut/ut.o 00:03:13.120 CC lib/ut_mock/mock.o 00:03:13.120 CC lib/log/log.o 00:03:13.120 CC lib/log/log_flags.o 00:03:13.120 CC lib/log/log_deprecated.o 00:03:13.120 LIB libspdk_ut_mock.a 00:03:13.120 LIB libspdk_log.a 00:03:13.120 LIB libspdk_ut.a 00:03:13.120 SO libspdk_ut_mock.so.6.0 00:03:13.120 SO libspdk_ut.so.2.0 00:03:13.120 SO libspdk_log.so.7.1 00:03:13.120 SYMLINK libspdk_ut.so 00:03:13.120 SYMLINK libspdk_ut_mock.so 00:03:13.120 SYMLINK libspdk_log.so 00:03:13.120 CC lib/dma/dma.o 00:03:13.120 CC lib/ioat/ioat.o 00:03:13.120 CXX lib/trace_parser/trace.o 00:03:13.120 CC lib/util/base64.o 00:03:13.120 CC lib/util/bit_array.o 00:03:13.120 CC lib/util/cpuset.o 00:03:13.120 CC lib/util/crc32c.o 00:03:13.120 CC lib/util/crc16.o 00:03:13.120 CC lib/util/crc32.o 00:03:13.120 CC lib/vfio_user/host/vfio_user_pci.o 00:03:13.120 CC lib/util/crc32_ieee.o 00:03:13.120 CC lib/util/crc64.o 00:03:13.120 CC lib/util/dif.o 00:03:13.120 CC lib/util/fd.o 00:03:13.120 CC lib/util/fd_group.o 00:03:13.120 CC lib/util/file.o 00:03:13.120 LIB libspdk_dma.a 00:03:13.120 SO libspdk_dma.so.5.0 00:03:13.120 LIB libspdk_ioat.a 00:03:13.120 SO libspdk_ioat.so.7.0 00:03:13.120 SYMLINK libspdk_dma.so 00:03:13.120 CC lib/util/hexlify.o 00:03:13.120 CC lib/util/iov.o 00:03:13.120 CC lib/util/math.o 00:03:13.120 SYMLINK libspdk_ioat.so 00:03:13.120 CC lib/util/net.o 00:03:13.120 CC lib/util/pipe.o 00:03:13.120 CC lib/util/strerror_tls.o 00:03:13.120 CC lib/vfio_user/host/vfio_user.o 00:03:13.120 CC lib/util/string.o 00:03:13.120 CC lib/util/uuid.o 00:03:13.120 CC lib/util/xor.o 00:03:13.120 CC lib/util/zipf.o 00:03:13.120 CC lib/util/md5.o 00:03:13.120 LIB libspdk_vfio_user.a 00:03:13.120 SO libspdk_vfio_user.so.5.0 00:03:13.120 SYMLINK libspdk_vfio_user.so 00:03:13.120 LIB libspdk_util.a 00:03:13.120 SO libspdk_util.so.10.0 00:03:13.120 SYMLINK libspdk_util.so 00:03:13.120 LIB libspdk_trace_parser.a 00:03:13.120 SO libspdk_trace_parser.so.6.0 00:03:13.120 CC lib/vmd/vmd.o 00:03:13.120 CC lib/vmd/led.o 00:03:13.120 CC lib/rdma_utils/rdma_utils.o 00:03:13.120 CC lib/json/json_parse.o 00:03:13.120 CC lib/json/json_util.o 00:03:13.120 CC lib/rdma_provider/common.o 00:03:13.120 SYMLINK libspdk_trace_parser.so 00:03:13.120 CC 
lib/idxd/idxd.o 00:03:13.120 CC lib/json/json_write.o 00:03:13.120 CC lib/env_dpdk/env.o 00:03:13.120 CC lib/conf/conf.o 00:03:13.378 CC lib/env_dpdk/memory.o 00:03:13.378 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:13.378 CC lib/env_dpdk/pci.o 00:03:13.378 LIB libspdk_conf.a 00:03:13.378 CC lib/env_dpdk/init.o 00:03:13.378 SO libspdk_conf.so.6.0 00:03:13.378 LIB libspdk_rdma_utils.a 00:03:13.378 SO libspdk_rdma_utils.so.1.0 00:03:13.378 LIB libspdk_json.a 00:03:13.378 SYMLINK libspdk_conf.so 00:03:13.378 SO libspdk_json.so.6.0 00:03:13.378 CC lib/idxd/idxd_user.o 00:03:13.378 LIB libspdk_rdma_provider.a 00:03:13.378 SYMLINK libspdk_rdma_utils.so 00:03:13.378 CC lib/idxd/idxd_kernel.o 00:03:13.378 SO libspdk_rdma_provider.so.6.0 00:03:13.378 SYMLINK libspdk_json.so 00:03:13.378 CC lib/env_dpdk/threads.o 00:03:13.636 SYMLINK libspdk_rdma_provider.so 00:03:13.636 CC lib/env_dpdk/pci_ioat.o 00:03:13.636 CC lib/env_dpdk/pci_virtio.o 00:03:13.636 CC lib/jsonrpc/jsonrpc_server.o 00:03:13.636 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:13.636 CC lib/env_dpdk/pci_vmd.o 00:03:13.636 CC lib/env_dpdk/pci_idxd.o 00:03:13.636 CC lib/env_dpdk/pci_event.o 00:03:13.636 CC lib/env_dpdk/sigbus_handler.o 00:03:13.636 LIB libspdk_idxd.a 00:03:13.917 CC lib/env_dpdk/pci_dpdk.o 00:03:13.917 SO libspdk_idxd.so.12.1 00:03:13.917 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:13.917 LIB libspdk_vmd.a 00:03:13.917 CC lib/jsonrpc/jsonrpc_client.o 00:03:13.917 SO libspdk_vmd.so.6.0 00:03:13.917 SYMLINK libspdk_idxd.so 00:03:13.917 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:13.917 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:13.917 SYMLINK libspdk_vmd.so 00:03:14.197 LIB libspdk_jsonrpc.a 00:03:14.197 SO libspdk_jsonrpc.so.6.0 00:03:14.197 SYMLINK libspdk_jsonrpc.so 00:03:14.455 LIB libspdk_env_dpdk.a 00:03:14.455 CC lib/rpc/rpc.o 00:03:14.455 SO libspdk_env_dpdk.so.15.0 00:03:14.455 SYMLINK libspdk_env_dpdk.so 00:03:14.455 LIB libspdk_rpc.a 00:03:14.713 SO libspdk_rpc.so.6.0 00:03:14.713 SYMLINK libspdk_rpc.so 00:03:14.713 CC lib/trace/trace.o 00:03:14.713 CC lib/trace/trace_rpc.o 00:03:14.713 CC lib/trace/trace_flags.o 00:03:14.713 CC lib/notify/notify.o 00:03:14.713 CC lib/notify/notify_rpc.o 00:03:14.713 CC lib/keyring/keyring.o 00:03:14.713 CC lib/keyring/keyring_rpc.o 00:03:14.976 LIB libspdk_notify.a 00:03:14.976 SO libspdk_notify.so.6.0 00:03:14.976 SYMLINK libspdk_notify.so 00:03:14.976 LIB libspdk_keyring.a 00:03:14.976 LIB libspdk_trace.a 00:03:14.976 SO libspdk_keyring.so.2.0 00:03:14.976 SO libspdk_trace.so.11.0 00:03:15.233 SYMLINK libspdk_keyring.so 00:03:15.233 SYMLINK libspdk_trace.so 00:03:15.233 CC lib/sock/sock_rpc.o 00:03:15.233 CC lib/sock/sock.o 00:03:15.233 CC lib/thread/iobuf.o 00:03:15.233 CC lib/thread/thread.o 00:03:15.800 LIB libspdk_sock.a 00:03:15.800 SO libspdk_sock.so.10.0 00:03:15.800 SYMLINK libspdk_sock.so 00:03:16.058 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:16.058 CC lib/nvme/nvme_ctrlr.o 00:03:16.058 CC lib/nvme/nvme_ns.o 00:03:16.058 CC lib/nvme/nvme_ns_cmd.o 00:03:16.058 CC lib/nvme/nvme_pcie.o 00:03:16.058 CC lib/nvme/nvme_pcie_common.o 00:03:16.058 CC lib/nvme/nvme_fabric.o 00:03:16.058 CC lib/nvme/nvme_qpair.o 00:03:16.058 CC lib/nvme/nvme.o 00:03:16.623 CC lib/nvme/nvme_quirks.o 00:03:16.624 CC lib/nvme/nvme_transport.o 00:03:16.624 CC lib/nvme/nvme_discovery.o 00:03:16.624 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:16.624 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:16.881 CC lib/nvme/nvme_tcp.o 00:03:16.881 CC lib/nvme/nvme_opal.o 00:03:16.881 CC lib/nvme/nvme_io_msg.o 00:03:16.881 LIB 
libspdk_thread.a 00:03:16.881 SO libspdk_thread.so.10.2 00:03:16.881 SYMLINK libspdk_thread.so 00:03:16.881 CC lib/nvme/nvme_poll_group.o 00:03:17.139 CC lib/nvme/nvme_zns.o 00:03:17.139 CC lib/nvme/nvme_stubs.o 00:03:17.139 CC lib/nvme/nvme_auth.o 00:03:17.398 CC lib/accel/accel.o 00:03:17.398 CC lib/accel/accel_rpc.o 00:03:17.398 CC lib/accel/accel_sw.o 00:03:17.398 CC lib/nvme/nvme_cuse.o 00:03:17.398 CC lib/nvme/nvme_rdma.o 00:03:17.656 CC lib/blob/blobstore.o 00:03:17.656 CC lib/blob/request.o 00:03:17.656 CC lib/init/json_config.o 00:03:17.656 CC lib/init/subsystem.o 00:03:17.656 CC lib/virtio/virtio.o 00:03:17.914 CC lib/virtio/virtio_vhost_user.o 00:03:17.914 CC lib/init/subsystem_rpc.o 00:03:17.914 CC lib/init/rpc.o 00:03:17.914 CC lib/virtio/virtio_vfio_user.o 00:03:18.172 CC lib/virtio/virtio_pci.o 00:03:18.172 CC lib/blob/zeroes.o 00:03:18.172 LIB libspdk_init.a 00:03:18.172 CC lib/blob/blob_bs_dev.o 00:03:18.172 SO libspdk_init.so.6.0 00:03:18.172 CC lib/fsdev/fsdev.o 00:03:18.172 LIB libspdk_accel.a 00:03:18.172 CC lib/fsdev/fsdev_io.o 00:03:18.172 CC lib/fsdev/fsdev_rpc.o 00:03:18.172 SYMLINK libspdk_init.so 00:03:18.172 SO libspdk_accel.so.16.0 00:03:18.172 SYMLINK libspdk_accel.so 00:03:18.431 CC lib/event/reactor.o 00:03:18.431 CC lib/event/app.o 00:03:18.431 CC lib/event/log_rpc.o 00:03:18.431 LIB libspdk_virtio.a 00:03:18.431 CC lib/event/app_rpc.o 00:03:18.431 SO libspdk_virtio.so.7.0 00:03:18.431 CC lib/bdev/bdev.o 00:03:18.431 CC lib/event/scheduler_static.o 00:03:18.431 SYMLINK libspdk_virtio.so 00:03:18.431 CC lib/bdev/bdev_rpc.o 00:03:18.431 CC lib/bdev/bdev_zone.o 00:03:18.431 CC lib/bdev/part.o 00:03:18.431 CC lib/bdev/scsi_nvme.o 00:03:18.689 LIB libspdk_fsdev.a 00:03:18.689 SO libspdk_fsdev.so.1.0 00:03:18.689 SYMLINK libspdk_fsdev.so 00:03:18.689 LIB libspdk_event.a 00:03:18.947 SO libspdk_event.so.15.0 00:03:18.947 LIB libspdk_nvme.a 00:03:18.947 SYMLINK libspdk_event.so 00:03:18.947 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:18.947 SO libspdk_nvme.so.14.0 00:03:19.205 SYMLINK libspdk_nvme.so 00:03:19.470 LIB libspdk_fuse_dispatcher.a 00:03:19.732 SO libspdk_fuse_dispatcher.so.1.0 00:03:19.732 SYMLINK libspdk_fuse_dispatcher.so 00:03:20.665 LIB libspdk_blob.a 00:03:20.665 SO libspdk_blob.so.11.0 00:03:20.923 SYMLINK libspdk_blob.so 00:03:20.923 LIB libspdk_bdev.a 00:03:20.923 CC lib/blobfs/tree.o 00:03:20.923 CC lib/blobfs/blobfs.o 00:03:20.923 CC lib/lvol/lvol.o 00:03:20.923 SO libspdk_bdev.so.17.0 00:03:21.180 SYMLINK libspdk_bdev.so 00:03:21.180 CC lib/nbd/nbd.o 00:03:21.180 CC lib/nbd/nbd_rpc.o 00:03:21.180 CC lib/ftl/ftl_core.o 00:03:21.180 CC lib/ublk/ublk.o 00:03:21.180 CC lib/ftl/ftl_init.o 00:03:21.180 CC lib/nvmf/ctrlr.o 00:03:21.180 CC lib/ublk/ublk_rpc.o 00:03:21.180 CC lib/scsi/dev.o 00:03:21.438 CC lib/ftl/ftl_layout.o 00:03:21.438 CC lib/nvmf/ctrlr_discovery.o 00:03:21.438 CC lib/ftl/ftl_debug.o 00:03:21.438 CC lib/scsi/lun.o 00:03:21.438 LIB libspdk_nbd.a 00:03:21.696 SO libspdk_nbd.so.7.0 00:03:21.696 SYMLINK libspdk_nbd.so 00:03:21.696 CC lib/nvmf/ctrlr_bdev.o 00:03:21.696 CC lib/scsi/port.o 00:03:21.696 CC lib/ftl/ftl_io.o 00:03:21.696 CC lib/ftl/ftl_sb.o 00:03:21.696 CC lib/scsi/scsi.o 00:03:21.696 LIB libspdk_ublk.a 00:03:21.696 CC lib/ftl/ftl_l2p.o 00:03:21.696 SO libspdk_ublk.so.3.0 00:03:21.954 SYMLINK libspdk_ublk.so 00:03:21.954 CC lib/ftl/ftl_l2p_flat.o 00:03:21.954 CC lib/ftl/ftl_nv_cache.o 00:03:21.954 LIB libspdk_blobfs.a 00:03:21.954 CC lib/scsi/scsi_bdev.o 00:03:21.954 SO libspdk_blobfs.so.10.0 00:03:21.954 CC 
lib/ftl/ftl_band.o 00:03:21.954 CC lib/scsi/scsi_pr.o 00:03:21.954 SYMLINK libspdk_blobfs.so 00:03:21.954 CC lib/nvmf/subsystem.o 00:03:21.954 CC lib/nvmf/nvmf.o 00:03:21.954 LIB libspdk_lvol.a 00:03:21.954 SO libspdk_lvol.so.10.0 00:03:21.954 CC lib/nvmf/nvmf_rpc.o 00:03:21.954 SYMLINK libspdk_lvol.so 00:03:21.954 CC lib/nvmf/transport.o 00:03:22.211 CC lib/nvmf/tcp.o 00:03:22.211 CC lib/ftl/ftl_band_ops.o 00:03:22.211 CC lib/scsi/scsi_rpc.o 00:03:22.211 CC lib/scsi/task.o 00:03:22.468 CC lib/ftl/ftl_writer.o 00:03:22.468 LIB libspdk_scsi.a 00:03:22.468 SO libspdk_scsi.so.9.0 00:03:22.468 SYMLINK libspdk_scsi.so 00:03:22.468 CC lib/nvmf/stubs.o 00:03:22.468 CC lib/ftl/ftl_rq.o 00:03:22.468 CC lib/ftl/ftl_reloc.o 00:03:22.748 CC lib/nvmf/mdns_server.o 00:03:22.748 CC lib/nvmf/rdma.o 00:03:22.748 CC lib/ftl/ftl_l2p_cache.o 00:03:22.748 CC lib/nvmf/auth.o 00:03:23.006 CC lib/ftl/ftl_p2l.o 00:03:23.006 CC lib/ftl/ftl_p2l_log.o 00:03:23.006 CC lib/iscsi/conn.o 00:03:23.006 CC lib/vhost/vhost.o 00:03:23.006 CC lib/ftl/mngt/ftl_mngt.o 00:03:23.263 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:23.263 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:23.263 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:23.263 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:23.263 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:23.263 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:23.263 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:23.521 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:23.521 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:23.521 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:23.521 CC lib/iscsi/init_grp.o 00:03:23.521 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:23.521 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:23.521 CC lib/iscsi/iscsi.o 00:03:23.521 CC lib/vhost/vhost_rpc.o 00:03:23.521 CC lib/vhost/vhost_scsi.o 00:03:23.521 CC lib/ftl/utils/ftl_conf.o 00:03:23.778 CC lib/ftl/utils/ftl_md.o 00:03:23.778 CC lib/ftl/utils/ftl_mempool.o 00:03:23.778 CC lib/ftl/utils/ftl_bitmap.o 00:03:23.778 CC lib/ftl/utils/ftl_property.o 00:03:23.778 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:23.778 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:23.778 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.036 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.036 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.036 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.036 CC lib/vhost/vhost_blk.o 00:03:24.036 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:24.036 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.036 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:24.036 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:24.036 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:24.036 CC lib/iscsi/param.o 00:03:24.294 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:24.294 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:24.294 CC lib/vhost/rte_vhost_user.o 00:03:24.294 CC lib/ftl/base/ftl_base_dev.o 00:03:24.294 CC lib/ftl/base/ftl_base_bdev.o 00:03:24.294 CC lib/ftl/ftl_trace.o 00:03:24.551 CC lib/iscsi/portal_grp.o 00:03:24.551 CC lib/iscsi/tgt_node.o 00:03:24.551 CC lib/iscsi/iscsi_subsystem.o 00:03:24.551 CC lib/iscsi/iscsi_rpc.o 00:03:24.551 CC lib/iscsi/task.o 00:03:24.551 LIB libspdk_ftl.a 00:03:24.809 SO libspdk_ftl.so.9.0 00:03:24.809 LIB libspdk_iscsi.a 00:03:24.809 LIB libspdk_nvmf.a 00:03:24.809 SO libspdk_iscsi.so.8.0 00:03:25.066 SYMLINK libspdk_ftl.so 00:03:25.066 SO libspdk_nvmf.so.19.0 00:03:25.066 SYMLINK libspdk_iscsi.so 00:03:25.323 SYMLINK libspdk_nvmf.so 00:03:25.323 LIB libspdk_vhost.a 00:03:25.323 SO libspdk_vhost.so.8.0 00:03:25.323 SYMLINK libspdk_vhost.so 00:03:25.580 CC module/env_dpdk/env_dpdk_rpc.o 00:03:25.580 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:03:25.580 CC module/scheduler/gscheduler/gscheduler.o 00:03:25.580 CC module/keyring/file/keyring.o 00:03:25.580 CC module/blob/bdev/blob_bdev.o 00:03:25.580 CC module/fsdev/aio/fsdev_aio.o 00:03:25.580 CC module/sock/posix/posix.o 00:03:25.835 CC module/keyring/linux/keyring.o 00:03:25.835 CC module/accel/error/accel_error.o 00:03:25.835 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:25.835 LIB libspdk_env_dpdk_rpc.a 00:03:25.835 SO libspdk_env_dpdk_rpc.so.6.0 00:03:25.835 SYMLINK libspdk_env_dpdk_rpc.so 00:03:25.835 CC module/keyring/linux/keyring_rpc.o 00:03:25.835 CC module/accel/error/accel_error_rpc.o 00:03:25.835 LIB libspdk_scheduler_dpdk_governor.a 00:03:25.835 LIB libspdk_scheduler_gscheduler.a 00:03:25.835 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:25.835 SO libspdk_scheduler_gscheduler.so.4.0 00:03:25.835 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:25.835 LIB libspdk_scheduler_dynamic.a 00:03:25.835 SYMLINK libspdk_scheduler_gscheduler.so 00:03:25.835 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:25.835 CC module/fsdev/aio/linux_aio_mgr.o 00:03:25.835 SO libspdk_scheduler_dynamic.so.4.0 00:03:25.835 CC module/keyring/file/keyring_rpc.o 00:03:25.835 LIB libspdk_accel_error.a 00:03:25.835 LIB libspdk_keyring_linux.a 00:03:25.835 SYMLINK libspdk_scheduler_dynamic.so 00:03:25.835 SO libspdk_keyring_linux.so.1.0 00:03:25.835 SO libspdk_accel_error.so.2.0 00:03:26.093 LIB libspdk_blob_bdev.a 00:03:26.093 SO libspdk_blob_bdev.so.11.0 00:03:26.093 SYMLINK libspdk_keyring_linux.so 00:03:26.093 SYMLINK libspdk_accel_error.so 00:03:26.093 LIB libspdk_keyring_file.a 00:03:26.093 SYMLINK libspdk_blob_bdev.so 00:03:26.093 SO libspdk_keyring_file.so.2.0 00:03:26.093 CC module/accel/ioat/accel_ioat.o 00:03:26.093 CC module/accel/ioat/accel_ioat_rpc.o 00:03:26.093 CC module/accel/iaa/accel_iaa.o 00:03:26.093 CC module/accel/iaa/accel_iaa_rpc.o 00:03:26.093 SYMLINK libspdk_keyring_file.so 00:03:26.093 CC module/accel/dsa/accel_dsa.o 00:03:26.093 CC module/accel/dsa/accel_dsa_rpc.o 00:03:26.351 LIB libspdk_fsdev_aio.a 00:03:26.351 LIB libspdk_accel_ioat.a 00:03:26.351 CC module/blobfs/bdev/blobfs_bdev.o 00:03:26.351 SO libspdk_fsdev_aio.so.1.0 00:03:26.351 LIB libspdk_accel_iaa.a 00:03:26.351 SO libspdk_accel_ioat.so.6.0 00:03:26.351 CC module/bdev/delay/vbdev_delay.o 00:03:26.351 SO libspdk_accel_iaa.so.3.0 00:03:26.351 CC module/bdev/error/vbdev_error.o 00:03:26.351 SYMLINK libspdk_accel_ioat.so 00:03:26.351 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:26.351 CC module/bdev/gpt/gpt.o 00:03:26.351 SYMLINK libspdk_fsdev_aio.so 00:03:26.351 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:26.351 SYMLINK libspdk_accel_iaa.so 00:03:26.351 CC module/bdev/gpt/vbdev_gpt.o 00:03:26.351 LIB libspdk_accel_dsa.a 00:03:26.351 CC module/bdev/lvol/vbdev_lvol.o 00:03:26.351 SO libspdk_accel_dsa.so.5.0 00:03:26.351 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:26.351 LIB libspdk_sock_posix.a 00:03:26.351 SYMLINK libspdk_accel_dsa.so 00:03:26.609 SO libspdk_sock_posix.so.6.0 00:03:26.609 LIB libspdk_blobfs_bdev.a 00:03:26.609 SO libspdk_blobfs_bdev.so.6.0 00:03:26.609 SYMLINK libspdk_sock_posix.so 00:03:26.609 CC module/bdev/error/vbdev_error_rpc.o 00:03:26.609 SYMLINK libspdk_blobfs_bdev.so 00:03:26.609 CC module/bdev/malloc/bdev_malloc.o 00:03:26.609 LIB libspdk_bdev_gpt.a 00:03:26.609 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:26.609 SO libspdk_bdev_gpt.so.6.0 00:03:26.609 CC module/bdev/null/bdev_null.o 00:03:26.609 CC module/bdev/nvme/bdev_nvme.o 
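Note on the output format: the CC lib/....o and CC module/....o entries, together with the LIB libspdk_*.a, SO libspdk_*.so and SYMLINK lines, are SPDK's own quiet-mode make output, in contrast to the meson/ninja [n/267] lines earlier in the log that belong to the bundled DPDK submodule. The log does not record the configure invocation that preceded this build, so the shell sketch below is only an inference: every flag is an assumption read off what the log shows (a debug DPDK buildtype, b_sanitize=address, shared libspdk_*.so objects being linked, and a bdev_xnvme module compiled later in the log), not the CI job's actual command line.

    # Hedged sketch, not the literal CI invocation. Flags inferred from this log:
    #   --enable-debug : the bundled DPDK was configured with buildtype=debug
    #   --enable-asan  : the bundled DPDK was configured with b_sanitize=address
    #   --with-shared  : libspdk_*.so shared objects are linked in this build
    #   --with-xnvme   : module/bdev/xnvme objects appear later in this build
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-asan --with-shared --with-xnvme
    make -j10   # -j10 mirrors the parallelism of the earlier "ninja ... -j 10" step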
00:03:26.609 LIB libspdk_bdev_delay.a 00:03:26.609 SYMLINK libspdk_bdev_gpt.so 00:03:26.609 LIB libspdk_bdev_error.a 00:03:26.609 SO libspdk_bdev_delay.so.6.0 00:03:26.609 CC module/bdev/passthru/vbdev_passthru.o 00:03:26.609 SO libspdk_bdev_error.so.6.0 00:03:26.609 SYMLINK libspdk_bdev_delay.so 00:03:26.867 SYMLINK libspdk_bdev_error.so 00:03:26.867 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:26.867 CC module/bdev/nvme/nvme_rpc.o 00:03:26.867 CC module/bdev/raid/bdev_raid.o 00:03:26.867 CC module/bdev/null/bdev_null_rpc.o 00:03:26.867 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:26.867 CC module/bdev/split/vbdev_split.o 00:03:26.867 LIB libspdk_bdev_lvol.a 00:03:26.867 SO libspdk_bdev_lvol.so.6.0 00:03:26.867 CC module/bdev/nvme/bdev_mdns_client.o 00:03:26.867 LIB libspdk_bdev_null.a 00:03:26.867 SYMLINK libspdk_bdev_lvol.so 00:03:26.867 LIB libspdk_bdev_malloc.a 00:03:26.867 SO libspdk_bdev_null.so.6.0 00:03:26.867 SO libspdk_bdev_malloc.so.6.0 00:03:27.163 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:27.163 SYMLINK libspdk_bdev_null.so 00:03:27.163 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:27.163 SYMLINK libspdk_bdev_malloc.so 00:03:27.163 CC module/bdev/split/vbdev_split_rpc.o 00:03:27.163 CC module/bdev/nvme/vbdev_opal.o 00:03:27.163 CC module/bdev/xnvme/bdev_xnvme.o 00:03:27.163 LIB libspdk_bdev_passthru.a 00:03:27.163 SO libspdk_bdev_passthru.so.6.0 00:03:27.163 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:27.163 CC module/bdev/aio/bdev_aio.o 00:03:27.163 LIB libspdk_bdev_split.a 00:03:27.163 LIB libspdk_bdev_zone_block.a 00:03:27.163 SYMLINK libspdk_bdev_passthru.so 00:03:27.163 CC module/bdev/aio/bdev_aio_rpc.o 00:03:27.163 SO libspdk_bdev_zone_block.so.6.0 00:03:27.163 SO libspdk_bdev_split.so.6.0 00:03:27.163 SYMLINK libspdk_bdev_zone_block.so 00:03:27.163 SYMLINK libspdk_bdev_split.so 00:03:27.163 CC module/bdev/raid/bdev_raid_rpc.o 00:03:27.421 CC module/bdev/raid/bdev_raid_sb.o 00:03:27.421 LIB libspdk_bdev_xnvme.a 00:03:27.421 SO libspdk_bdev_xnvme.so.3.0 00:03:27.421 CC module/bdev/ftl/bdev_ftl.o 00:03:27.421 CC module/bdev/raid/raid0.o 00:03:27.421 SYMLINK libspdk_bdev_xnvme.so 00:03:27.421 CC module/bdev/raid/raid1.o 00:03:27.421 CC module/bdev/iscsi/bdev_iscsi.o 00:03:27.421 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:27.421 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:27.681 LIB libspdk_bdev_aio.a 00:03:27.681 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:27.681 SO libspdk_bdev_aio.so.6.0 00:03:27.681 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:27.681 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:27.681 CC module/bdev/raid/concat.o 00:03:27.681 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:27.681 SYMLINK libspdk_bdev_aio.so 00:03:27.681 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:27.681 LIB libspdk_bdev_iscsi.a 00:03:27.681 LIB libspdk_bdev_ftl.a 00:03:27.681 SO libspdk_bdev_iscsi.so.6.0 00:03:27.681 SO libspdk_bdev_ftl.so.6.0 00:03:27.938 LIB libspdk_bdev_raid.a 00:03:27.938 SYMLINK libspdk_bdev_ftl.so 00:03:27.938 SYMLINK libspdk_bdev_iscsi.so 00:03:27.938 SO libspdk_bdev_raid.so.6.0 00:03:27.938 LIB libspdk_bdev_virtio.a 00:03:27.938 SYMLINK libspdk_bdev_raid.so 00:03:27.938 SO libspdk_bdev_virtio.so.6.0 00:03:27.938 SYMLINK libspdk_bdev_virtio.so 00:03:28.872 LIB libspdk_bdev_nvme.a 00:03:29.130 SO libspdk_bdev_nvme.so.7.0 00:03:29.130 SYMLINK libspdk_bdev_nvme.so 00:03:29.388 CC module/event/subsystems/keyring/keyring.o 00:03:29.388 CC module/event/subsystems/iobuf/iobuf.o 00:03:29.388 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:29.388 
CC module/event/subsystems/scheduler/scheduler.o 00:03:29.388 CC module/event/subsystems/sock/sock.o 00:03:29.647 CC module/event/subsystems/vmd/vmd.o 00:03:29.647 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:29.647 CC module/event/subsystems/fsdev/fsdev.o 00:03:29.647 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:29.647 LIB libspdk_event_scheduler.a 00:03:29.647 LIB libspdk_event_fsdev.a 00:03:29.647 LIB libspdk_event_keyring.a 00:03:29.647 SO libspdk_event_scheduler.so.4.0 00:03:29.647 LIB libspdk_event_iobuf.a 00:03:29.647 LIB libspdk_event_sock.a 00:03:29.647 LIB libspdk_event_vmd.a 00:03:29.647 SO libspdk_event_fsdev.so.1.0 00:03:29.647 SO libspdk_event_keyring.so.1.0 00:03:29.647 LIB libspdk_event_vhost_blk.a 00:03:29.647 SO libspdk_event_iobuf.so.3.0 00:03:29.647 SO libspdk_event_sock.so.5.0 00:03:29.647 SO libspdk_event_vmd.so.6.0 00:03:29.647 SYMLINK libspdk_event_scheduler.so 00:03:29.647 SO libspdk_event_vhost_blk.so.3.0 00:03:29.647 SYMLINK libspdk_event_keyring.so 00:03:29.647 SYMLINK libspdk_event_fsdev.so 00:03:29.647 SYMLINK libspdk_event_sock.so 00:03:29.647 SYMLINK libspdk_event_iobuf.so 00:03:29.647 SYMLINK libspdk_event_vmd.so 00:03:29.647 SYMLINK libspdk_event_vhost_blk.so 00:03:29.905 CC module/event/subsystems/accel/accel.o 00:03:30.164 LIB libspdk_event_accel.a 00:03:30.164 SO libspdk_event_accel.so.6.0 00:03:30.164 SYMLINK libspdk_event_accel.so 00:03:30.422 CC module/event/subsystems/bdev/bdev.o 00:03:30.422 LIB libspdk_event_bdev.a 00:03:30.422 SO libspdk_event_bdev.so.6.0 00:03:30.681 SYMLINK libspdk_event_bdev.so 00:03:30.681 CC module/event/subsystems/scsi/scsi.o 00:03:30.681 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:30.681 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:30.681 CC module/event/subsystems/nbd/nbd.o 00:03:30.681 CC module/event/subsystems/ublk/ublk.o 00:03:30.939 LIB libspdk_event_nbd.a 00:03:30.939 SO libspdk_event_nbd.so.6.0 00:03:30.939 LIB libspdk_event_ublk.a 00:03:30.939 LIB libspdk_event_scsi.a 00:03:30.939 SO libspdk_event_ublk.so.3.0 00:03:30.939 SO libspdk_event_scsi.so.6.0 00:03:30.939 SYMLINK libspdk_event_nbd.so 00:03:30.939 SYMLINK libspdk_event_ublk.so 00:03:30.939 SYMLINK libspdk_event_scsi.so 00:03:30.939 LIB libspdk_event_nvmf.a 00:03:30.939 SO libspdk_event_nvmf.so.6.0 00:03:30.939 SYMLINK libspdk_event_nvmf.so 00:03:31.197 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:31.197 CC module/event/subsystems/iscsi/iscsi.o 00:03:31.197 LIB libspdk_event_vhost_scsi.a 00:03:31.197 LIB libspdk_event_iscsi.a 00:03:31.197 SO libspdk_event_vhost_scsi.so.3.0 00:03:31.197 SO libspdk_event_iscsi.so.6.0 00:03:31.197 SYMLINK libspdk_event_vhost_scsi.so 00:03:31.197 SYMLINK libspdk_event_iscsi.so 00:03:31.455 SO libspdk.so.6.0 00:03:31.455 SYMLINK libspdk.so 00:03:31.713 TEST_HEADER include/spdk/accel.h 00:03:31.713 CC test/rpc_client/rpc_client_test.o 00:03:31.713 TEST_HEADER include/spdk/accel_module.h 00:03:31.713 TEST_HEADER include/spdk/assert.h 00:03:31.713 TEST_HEADER include/spdk/barrier.h 00:03:31.713 CXX app/trace/trace.o 00:03:31.713 TEST_HEADER include/spdk/base64.h 00:03:31.713 TEST_HEADER include/spdk/bdev.h 00:03:31.713 TEST_HEADER include/spdk/bdev_module.h 00:03:31.713 TEST_HEADER include/spdk/bdev_zone.h 00:03:31.713 TEST_HEADER include/spdk/bit_array.h 00:03:31.713 TEST_HEADER include/spdk/bit_pool.h 00:03:31.713 TEST_HEADER include/spdk/blob_bdev.h 00:03:31.713 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:31.713 TEST_HEADER include/spdk/blobfs.h 00:03:31.713 TEST_HEADER include/spdk/blob.h 
00:03:31.713 TEST_HEADER include/spdk/conf.h 00:03:31.713 TEST_HEADER include/spdk/config.h 00:03:31.713 TEST_HEADER include/spdk/cpuset.h 00:03:31.714 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:31.714 TEST_HEADER include/spdk/crc16.h 00:03:31.714 TEST_HEADER include/spdk/crc32.h 00:03:31.714 TEST_HEADER include/spdk/crc64.h 00:03:31.714 TEST_HEADER include/spdk/dif.h 00:03:31.714 TEST_HEADER include/spdk/dma.h 00:03:31.714 TEST_HEADER include/spdk/endian.h 00:03:31.714 TEST_HEADER include/spdk/env_dpdk.h 00:03:31.714 TEST_HEADER include/spdk/env.h 00:03:31.714 TEST_HEADER include/spdk/event.h 00:03:31.714 TEST_HEADER include/spdk/fd_group.h 00:03:31.714 TEST_HEADER include/spdk/fd.h 00:03:31.714 TEST_HEADER include/spdk/file.h 00:03:31.714 TEST_HEADER include/spdk/fsdev.h 00:03:31.714 TEST_HEADER include/spdk/fsdev_module.h 00:03:31.714 TEST_HEADER include/spdk/ftl.h 00:03:31.714 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:31.714 TEST_HEADER include/spdk/gpt_spec.h 00:03:31.714 TEST_HEADER include/spdk/hexlify.h 00:03:31.714 CC test/thread/poller_perf/poller_perf.o 00:03:31.714 TEST_HEADER include/spdk/histogram_data.h 00:03:31.714 TEST_HEADER include/spdk/idxd.h 00:03:31.714 TEST_HEADER include/spdk/idxd_spec.h 00:03:31.714 CC examples/ioat/perf/perf.o 00:03:31.714 TEST_HEADER include/spdk/init.h 00:03:31.714 TEST_HEADER include/spdk/ioat.h 00:03:31.714 TEST_HEADER include/spdk/ioat_spec.h 00:03:31.714 TEST_HEADER include/spdk/iscsi_spec.h 00:03:31.714 TEST_HEADER include/spdk/json.h 00:03:31.714 TEST_HEADER include/spdk/jsonrpc.h 00:03:31.714 TEST_HEADER include/spdk/keyring.h 00:03:31.714 TEST_HEADER include/spdk/keyring_module.h 00:03:31.714 TEST_HEADER include/spdk/likely.h 00:03:31.714 TEST_HEADER include/spdk/log.h 00:03:31.714 TEST_HEADER include/spdk/lvol.h 00:03:31.714 TEST_HEADER include/spdk/md5.h 00:03:31.714 CC examples/util/zipf/zipf.o 00:03:31.714 TEST_HEADER include/spdk/memory.h 00:03:31.714 TEST_HEADER include/spdk/mmio.h 00:03:31.714 TEST_HEADER include/spdk/nbd.h 00:03:31.714 TEST_HEADER include/spdk/net.h 00:03:31.714 TEST_HEADER include/spdk/notify.h 00:03:31.714 TEST_HEADER include/spdk/nvme.h 00:03:31.714 TEST_HEADER include/spdk/nvme_intel.h 00:03:31.714 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:31.714 CC test/dma/test_dma/test_dma.o 00:03:31.714 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:31.714 TEST_HEADER include/spdk/nvme_spec.h 00:03:31.714 TEST_HEADER include/spdk/nvme_zns.h 00:03:31.714 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:31.714 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:31.714 TEST_HEADER include/spdk/nvmf.h 00:03:31.714 TEST_HEADER include/spdk/nvmf_spec.h 00:03:31.714 TEST_HEADER include/spdk/nvmf_transport.h 00:03:31.714 TEST_HEADER include/spdk/opal.h 00:03:31.714 TEST_HEADER include/spdk/opal_spec.h 00:03:31.714 TEST_HEADER include/spdk/pci_ids.h 00:03:31.714 TEST_HEADER include/spdk/pipe.h 00:03:31.714 TEST_HEADER include/spdk/queue.h 00:03:31.714 CC test/app/bdev_svc/bdev_svc.o 00:03:31.714 CC test/env/mem_callbacks/mem_callbacks.o 00:03:31.714 TEST_HEADER include/spdk/reduce.h 00:03:31.714 TEST_HEADER include/spdk/rpc.h 00:03:31.714 TEST_HEADER include/spdk/scheduler.h 00:03:31.714 TEST_HEADER include/spdk/scsi.h 00:03:31.714 TEST_HEADER include/spdk/scsi_spec.h 00:03:31.714 TEST_HEADER include/spdk/sock.h 00:03:31.714 TEST_HEADER include/spdk/stdinc.h 00:03:31.714 TEST_HEADER include/spdk/string.h 00:03:31.714 TEST_HEADER include/spdk/thread.h 00:03:31.714 TEST_HEADER include/spdk/trace.h 00:03:31.714 TEST_HEADER 
include/spdk/trace_parser.h 00:03:31.714 TEST_HEADER include/spdk/tree.h 00:03:31.714 TEST_HEADER include/spdk/ublk.h 00:03:31.714 TEST_HEADER include/spdk/util.h 00:03:31.714 TEST_HEADER include/spdk/uuid.h 00:03:31.714 TEST_HEADER include/spdk/version.h 00:03:31.714 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:31.714 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:31.714 TEST_HEADER include/spdk/vhost.h 00:03:31.714 TEST_HEADER include/spdk/vmd.h 00:03:31.714 TEST_HEADER include/spdk/xor.h 00:03:31.714 TEST_HEADER include/spdk/zipf.h 00:03:31.714 CXX test/cpp_headers/accel.o 00:03:31.714 LINK rpc_client_test 00:03:31.714 LINK poller_perf 00:03:31.714 LINK interrupt_tgt 00:03:31.714 LINK zipf 00:03:31.972 LINK ioat_perf 00:03:31.972 CXX test/cpp_headers/accel_module.o 00:03:31.972 LINK bdev_svc 00:03:31.972 LINK spdk_trace 00:03:31.972 CC app/trace_record/trace_record.o 00:03:31.972 CXX test/cpp_headers/assert.o 00:03:31.972 CC app/nvmf_tgt/nvmf_main.o 00:03:31.972 CC app/spdk_tgt/spdk_tgt.o 00:03:31.972 CC examples/ioat/verify/verify.o 00:03:31.972 CC app/iscsi_tgt/iscsi_tgt.o 00:03:32.229 LINK mem_callbacks 00:03:32.229 CC test/app/histogram_perf/histogram_perf.o 00:03:32.229 CXX test/cpp_headers/barrier.o 00:03:32.229 LINK test_dma 00:03:32.229 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:32.229 LINK nvmf_tgt 00:03:32.229 LINK spdk_tgt 00:03:32.229 LINK iscsi_tgt 00:03:32.229 LINK spdk_trace_record 00:03:32.229 LINK verify 00:03:32.229 LINK histogram_perf 00:03:32.229 CXX test/cpp_headers/base64.o 00:03:32.229 CC test/env/vtophys/vtophys.o 00:03:32.229 CXX test/cpp_headers/bdev.o 00:03:32.229 CXX test/cpp_headers/bdev_module.o 00:03:32.487 CXX test/cpp_headers/bdev_zone.o 00:03:32.487 CXX test/cpp_headers/bit_array.o 00:03:32.487 CXX test/cpp_headers/bit_pool.o 00:03:32.487 LINK vtophys 00:03:32.487 CC app/spdk_lspci/spdk_lspci.o 00:03:32.487 CXX test/cpp_headers/blob_bdev.o 00:03:32.487 LINK nvme_fuzz 00:03:32.487 CC examples/sock/hello_world/hello_sock.o 00:03:32.487 CC examples/thread/thread/thread_ex.o 00:03:32.487 CC examples/vmd/lsvmd/lsvmd.o 00:03:32.487 LINK spdk_lspci 00:03:32.487 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:32.746 CC examples/idxd/perf/perf.o 00:03:32.746 CC test/event/event_perf/event_perf.o 00:03:32.746 CC test/nvme/aer/aer.o 00:03:32.746 CXX test/cpp_headers/blobfs_bdev.o 00:03:32.746 LINK lsvmd 00:03:32.746 LINK env_dpdk_post_init 00:03:32.746 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:32.746 LINK thread 00:03:32.746 LINK event_perf 00:03:32.746 CC app/spdk_nvme_perf/perf.o 00:03:32.746 LINK hello_sock 00:03:32.746 CXX test/cpp_headers/blobfs.o 00:03:33.004 LINK idxd_perf 00:03:33.004 CC examples/vmd/led/led.o 00:03:33.004 LINK aer 00:03:33.004 CC test/env/memory/memory_ut.o 00:03:33.004 CC test/event/reactor/reactor.o 00:03:33.004 CXX test/cpp_headers/blob.o 00:03:33.004 CC test/event/reactor_perf/reactor_perf.o 00:03:33.004 CC test/accel/dif/dif.o 00:03:33.004 LINK led 00:03:33.004 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:33.004 LINK reactor 00:03:33.262 CXX test/cpp_headers/conf.o 00:03:33.262 LINK reactor_perf 00:03:33.262 CC test/nvme/reset/reset.o 00:03:33.262 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:33.262 CXX test/cpp_headers/config.o 00:03:33.262 CXX test/cpp_headers/cpuset.o 00:03:33.262 CC test/app/jsoncat/jsoncat.o 00:03:33.520 CC examples/nvme/hello_world/hello_world.o 00:03:33.521 CC test/event/app_repeat/app_repeat.o 00:03:33.521 CXX test/cpp_headers/crc16.o 00:03:33.521 LINK jsoncat 00:03:33.521 LINK 
reset 00:03:33.521 CXX test/cpp_headers/crc32.o 00:03:33.521 LINK app_repeat 00:03:33.521 LINK hello_world 00:03:33.521 LINK spdk_nvme_perf 00:03:33.779 LINK vhost_fuzz 00:03:33.779 CXX test/cpp_headers/crc64.o 00:03:33.779 CC test/env/pci/pci_ut.o 00:03:33.779 CC test/nvme/sgl/sgl.o 00:03:33.779 CXX test/cpp_headers/dif.o 00:03:33.779 CC examples/nvme/reconnect/reconnect.o 00:03:33.779 CC test/event/scheduler/scheduler.o 00:03:33.779 LINK dif 00:03:33.779 CC app/spdk_nvme_identify/identify.o 00:03:33.779 CC app/spdk_nvme_discover/discovery_aer.o 00:03:34.037 CXX test/cpp_headers/dma.o 00:03:34.037 LINK sgl 00:03:34.037 LINK scheduler 00:03:34.037 LINK pci_ut 00:03:34.037 LINK memory_ut 00:03:34.037 LINK spdk_nvme_discover 00:03:34.037 CXX test/cpp_headers/endian.o 00:03:34.037 CC app/spdk_top/spdk_top.o 00:03:34.037 LINK reconnect 00:03:34.037 LINK iscsi_fuzz 00:03:34.296 CC test/nvme/e2edp/nvme_dp.o 00:03:34.296 CXX test/cpp_headers/env_dpdk.o 00:03:34.296 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:34.296 CC examples/nvme/arbitration/arbitration.o 00:03:34.296 CC examples/accel/perf/accel_perf.o 00:03:34.296 CXX test/cpp_headers/env.o 00:03:34.296 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:34.296 CC test/app/stub/stub.o 00:03:34.296 LINK nvme_dp 00:03:34.554 CC examples/blob/hello_world/hello_blob.o 00:03:34.554 CXX test/cpp_headers/event.o 00:03:34.554 LINK stub 00:03:34.554 LINK arbitration 00:03:34.554 LINK spdk_nvme_identify 00:03:34.554 LINK hello_fsdev 00:03:34.554 CC test/nvme/overhead/overhead.o 00:03:34.554 CXX test/cpp_headers/fd_group.o 00:03:34.554 LINK hello_blob 00:03:34.554 CXX test/cpp_headers/fd.o 00:03:34.813 LINK nvme_manage 00:03:34.813 CXX test/cpp_headers/file.o 00:03:34.813 CC test/nvme/err_injection/err_injection.o 00:03:34.813 CC examples/blob/cli/blobcli.o 00:03:34.813 CC test/nvme/startup/startup.o 00:03:34.813 LINK overhead 00:03:34.813 LINK accel_perf 00:03:34.813 CC app/vhost/vhost.o 00:03:34.813 CXX test/cpp_headers/fsdev.o 00:03:34.813 CC app/spdk_dd/spdk_dd.o 00:03:34.813 CC examples/nvme/hotplug/hotplug.o 00:03:34.813 LINK startup 00:03:35.071 LINK err_injection 00:03:35.071 CXX test/cpp_headers/fsdev_module.o 00:03:35.071 LINK vhost 00:03:35.071 LINK spdk_top 00:03:35.071 CC test/nvme/reserve/reserve.o 00:03:35.071 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:35.071 LINK hotplug 00:03:35.071 CXX test/cpp_headers/ftl.o 00:03:35.071 CC test/nvme/simple_copy/simple_copy.o 00:03:35.071 LINK spdk_dd 00:03:35.071 CC test/nvme/connect_stress/connect_stress.o 00:03:35.329 CC test/nvme/boot_partition/boot_partition.o 00:03:35.329 CXX test/cpp_headers/fuse_dispatcher.o 00:03:35.329 LINK cmb_copy 00:03:35.329 CC examples/bdev/hello_world/hello_bdev.o 00:03:35.329 LINK reserve 00:03:35.329 LINK blobcli 00:03:35.329 CC app/fio/nvme/fio_plugin.o 00:03:35.329 CXX test/cpp_headers/gpt_spec.o 00:03:35.329 LINK boot_partition 00:03:35.329 LINK connect_stress 00:03:35.329 LINK simple_copy 00:03:35.329 CC app/fio/bdev/fio_plugin.o 00:03:35.329 CC examples/nvme/abort/abort.o 00:03:35.329 LINK hello_bdev 00:03:35.329 CXX test/cpp_headers/hexlify.o 00:03:35.587 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:35.587 CC test/blobfs/mkfs/mkfs.o 00:03:35.587 CC test/nvme/compliance/nvme_compliance.o 00:03:35.587 CC test/nvme/fused_ordering/fused_ordering.o 00:03:35.587 CXX test/cpp_headers/histogram_data.o 00:03:35.587 CC examples/bdev/bdevperf/bdevperf.o 00:03:35.587 CXX test/cpp_headers/idxd.o 00:03:35.587 LINK pmr_persistence 00:03:35.587 CXX 
test/cpp_headers/idxd_spec.o 00:03:35.587 LINK mkfs 00:03:35.587 LINK fused_ordering 00:03:35.587 CXX test/cpp_headers/init.o 00:03:35.845 CXX test/cpp_headers/ioat.o 00:03:35.845 LINK spdk_nvme 00:03:35.845 LINK abort 00:03:35.845 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:35.845 CXX test/cpp_headers/ioat_spec.o 00:03:35.845 CC test/nvme/fdp/fdp.o 00:03:35.845 LINK nvme_compliance 00:03:35.845 CXX test/cpp_headers/iscsi_spec.o 00:03:35.845 LINK spdk_bdev 00:03:35.845 LINK doorbell_aers 00:03:35.845 CC test/nvme/cuse/cuse.o 00:03:35.845 CXX test/cpp_headers/json.o 00:03:36.103 CC test/bdev/bdevio/bdevio.o 00:03:36.103 CXX test/cpp_headers/jsonrpc.o 00:03:36.103 CC test/lvol/esnap/esnap.o 00:03:36.103 CXX test/cpp_headers/keyring.o 00:03:36.103 CXX test/cpp_headers/keyring_module.o 00:03:36.103 CXX test/cpp_headers/likely.o 00:03:36.103 CXX test/cpp_headers/log.o 00:03:36.103 LINK fdp 00:03:36.103 CXX test/cpp_headers/lvol.o 00:03:36.103 CXX test/cpp_headers/md5.o 00:03:36.103 CXX test/cpp_headers/memory.o 00:03:36.103 CXX test/cpp_headers/mmio.o 00:03:36.103 CXX test/cpp_headers/nbd.o 00:03:36.103 LINK bdevperf 00:03:36.103 CXX test/cpp_headers/net.o 00:03:36.103 CXX test/cpp_headers/notify.o 00:03:36.361 CXX test/cpp_headers/nvme.o 00:03:36.361 CXX test/cpp_headers/nvme_intel.o 00:03:36.361 CXX test/cpp_headers/nvme_ocssd.o 00:03:36.361 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.361 CXX test/cpp_headers/nvme_spec.o 00:03:36.361 CXX test/cpp_headers/nvme_zns.o 00:03:36.361 LINK bdevio 00:03:36.361 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.361 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.361 CXX test/cpp_headers/nvmf.o 00:03:36.361 CXX test/cpp_headers/nvmf_spec.o 00:03:36.361 CXX test/cpp_headers/nvmf_transport.o 00:03:36.361 CXX test/cpp_headers/opal.o 00:03:36.622 CC examples/nvmf/nvmf/nvmf.o 00:03:36.622 CXX test/cpp_headers/opal_spec.o 00:03:36.622 CXX test/cpp_headers/pci_ids.o 00:03:36.622 CXX test/cpp_headers/pipe.o 00:03:36.622 CXX test/cpp_headers/queue.o 00:03:36.622 CXX test/cpp_headers/reduce.o 00:03:36.622 CXX test/cpp_headers/rpc.o 00:03:36.622 CXX test/cpp_headers/scheduler.o 00:03:36.622 CXX test/cpp_headers/scsi.o 00:03:36.622 CXX test/cpp_headers/scsi_spec.o 00:03:36.622 CXX test/cpp_headers/sock.o 00:03:36.622 CXX test/cpp_headers/stdinc.o 00:03:36.622 CXX test/cpp_headers/string.o 00:03:36.622 CXX test/cpp_headers/thread.o 00:03:36.622 CXX test/cpp_headers/trace.o 00:03:36.880 CXX test/cpp_headers/trace_parser.o 00:03:36.880 CXX test/cpp_headers/tree.o 00:03:36.880 CXX test/cpp_headers/ublk.o 00:03:36.880 CXX test/cpp_headers/util.o 00:03:36.880 CXX test/cpp_headers/uuid.o 00:03:36.880 CXX test/cpp_headers/version.o 00:03:36.880 LINK nvmf 00:03:36.880 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.880 CXX test/cpp_headers/vfio_user_spec.o 00:03:36.880 CXX test/cpp_headers/vhost.o 00:03:36.880 CXX test/cpp_headers/vmd.o 00:03:36.880 CXX test/cpp_headers/xor.o 00:03:36.880 CXX test/cpp_headers/zipf.o 00:03:37.138 LINK cuse 00:03:41.353 LINK esnap 00:03:41.353 00:03:41.353 real 1m7.354s 00:03:41.353 user 6m10.113s 00:03:41.353 sys 1m5.263s 00:03:41.353 03:54:34 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:41.353 03:54:34 make -- common/autotest_common.sh@10 -- $ set +x 00:03:41.353 ************************************ 00:03:41.353 END TEST make 00:03:41.353 ************************************ 00:03:41.353 03:54:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:41.353 03:54:34 -- pm/common@29 -- $ signal_monitor_resources TERM 
00:03:41.353 03:54:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:41.353 03:54:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.353 03:54:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:41.353 03:54:34 -- pm/common@44 -- $ pid=5058 00:03:41.353 03:54:34 -- pm/common@50 -- $ kill -TERM 5058 00:03:41.353 03:54:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.353 03:54:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:41.353 03:54:34 -- pm/common@44 -- $ pid=5059 00:03:41.353 03:54:34 -- pm/common@50 -- $ kill -TERM 5059 00:03:41.353 03:54:34 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:41.353 03:54:34 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:41.353 03:54:34 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:41.353 03:54:34 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:41.353 03:54:34 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:41.353 03:54:34 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.353 03:54:34 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.353 03:54:34 -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.353 03:54:34 -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.353 03:54:34 -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.353 03:54:34 -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.353 03:54:34 -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.353 03:54:34 -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.353 03:54:34 -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.353 03:54:34 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.353 03:54:34 -- scripts/common.sh@344 -- # case "$op" in 00:03:41.353 03:54:34 -- scripts/common.sh@345 -- # : 1 00:03:41.353 03:54:34 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.353 03:54:34 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:41.353 03:54:34 -- scripts/common.sh@365 -- # decimal 1 00:03:41.353 03:54:34 -- scripts/common.sh@353 -- # local d=1 00:03:41.353 03:54:34 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.353 03:54:34 -- scripts/common.sh@355 -- # echo 1 00:03:41.353 03:54:34 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.353 03:54:34 -- scripts/common.sh@366 -- # decimal 2 00:03:41.353 03:54:34 -- scripts/common.sh@353 -- # local d=2 00:03:41.353 03:54:34 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.353 03:54:34 -- scripts/common.sh@355 -- # echo 2 00:03:41.353 03:54:34 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.353 03:54:34 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.353 03:54:34 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.353 03:54:34 -- scripts/common.sh@368 -- # return 0 00:03:41.353 03:54:34 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.353 03:54:34 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:41.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.353 --rc genhtml_branch_coverage=1 00:03:41.353 --rc genhtml_function_coverage=1 00:03:41.353 --rc genhtml_legend=1 00:03:41.353 --rc geninfo_all_blocks=1 00:03:41.353 --rc geninfo_unexecuted_blocks=1 00:03:41.353 00:03:41.353 ' 00:03:41.353 03:54:34 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:41.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.353 --rc genhtml_branch_coverage=1 00:03:41.353 --rc genhtml_function_coverage=1 00:03:41.353 --rc genhtml_legend=1 00:03:41.353 --rc geninfo_all_blocks=1 00:03:41.353 --rc geninfo_unexecuted_blocks=1 00:03:41.353 00:03:41.353 ' 00:03:41.353 03:54:34 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:41.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.353 --rc genhtml_branch_coverage=1 00:03:41.353 --rc genhtml_function_coverage=1 00:03:41.353 --rc genhtml_legend=1 00:03:41.353 --rc geninfo_all_blocks=1 00:03:41.353 --rc geninfo_unexecuted_blocks=1 00:03:41.353 00:03:41.353 ' 00:03:41.353 03:54:34 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:41.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.353 --rc genhtml_branch_coverage=1 00:03:41.353 --rc genhtml_function_coverage=1 00:03:41.353 --rc genhtml_legend=1 00:03:41.353 --rc geninfo_all_blocks=1 00:03:41.353 --rc geninfo_unexecuted_blocks=1 00:03:41.353 00:03:41.353 ' 00:03:41.353 03:54:34 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:41.353 03:54:34 -- nvmf/common.sh@7 -- # uname -s 00:03:41.353 03:54:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:41.353 03:54:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.353 03:54:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.353 03:54:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.354 03:54:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.354 03:54:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.354 03:54:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.354 03:54:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:41.354 03:54:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.354 03:54:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.354 03:54:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a36ad734-f3af-489b-a2e0-f6600d3595d9 00:03:41.354 
03:54:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=a36ad734-f3af-489b-a2e0-f6600d3595d9 00:03:41.354 03:54:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.354 03:54:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.354 03:54:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:41.354 03:54:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.354 03:54:34 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:41.354 03:54:34 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:41.354 03:54:34 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.354 03:54:34 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.354 03:54:34 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.354 03:54:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.354 03:54:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.354 03:54:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.354 03:54:34 -- paths/export.sh@5 -- # export PATH 00:03:41.354 03:54:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.354 03:54:34 -- nvmf/common.sh@51 -- # : 0 00:03:41.354 03:54:34 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:41.354 03:54:34 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:41.354 03:54:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.354 03:54:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.354 03:54:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.354 03:54:34 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:41.354 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:41.354 03:54:34 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:41.354 03:54:34 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:41.354 03:54:34 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:41.354 03:54:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:41.354 03:54:34 -- spdk/autotest.sh@32 -- # uname -s 00:03:41.354 03:54:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:41.354 03:54:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:41.354 03:54:34 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:41.354 03:54:34 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:41.354 03:54:34 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:41.354 03:54:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:41.354 03:54:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:41.354 03:54:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:41.354 03:54:34 -- spdk/autotest.sh@48 -- # udevadm_pid=54652 00:03:41.354 03:54:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:41.354 03:54:34 -- pm/common@17 -- # local monitor 00:03:41.354 03:54:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.354 03:54:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:41.354 03:54:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.354 03:54:34 -- pm/common@25 -- # sleep 1 00:03:41.354 03:54:34 -- pm/common@21 -- # date +%s 00:03:41.354 03:54:34 -- pm/common@21 -- # date +%s 00:03:41.354 03:54:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728791674 00:03:41.354 03:54:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728791674 00:03:41.354 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728791674_collect-vmstat.pm.log 00:03:41.354 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728791674_collect-cpu-load.pm.log 00:03:42.297 03:54:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:42.297 03:54:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:42.297 03:54:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:42.297 03:54:35 -- common/autotest_common.sh@10 -- # set +x 00:03:42.297 03:54:35 -- spdk/autotest.sh@59 -- # create_test_list 00:03:42.297 03:54:35 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:42.297 03:54:35 -- common/autotest_common.sh@10 -- # set +x 00:03:42.297 03:54:35 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:42.297 03:54:35 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:42.297 03:54:35 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:42.297 03:54:35 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:42.297 03:54:35 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:42.297 03:54:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:42.297 03:54:35 -- common/autotest_common.sh@1455 -- # uname 00:03:42.297 03:54:35 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:42.297 03:54:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:42.297 03:54:35 -- common/autotest_common.sh@1475 -- # uname 00:03:42.297 03:54:35 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:42.297 03:54:35 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:42.297 03:54:35 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:42.297 lcov: LCOV version 1.15 00:03:42.297 03:54:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:57.164 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:57.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:09.360 03:55:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:09.360 03:55:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.360 03:55:02 -- common/autotest_common.sh@10 -- # set +x 00:04:09.360 03:55:02 -- spdk/autotest.sh@78 -- # rm -f 00:04:09.360 03:55:02 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.183 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:10.183 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:10.183 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:10.183 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:10.183 03:55:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:10.183 03:55:03 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:10.183 03:55:03 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:10.183 03:55:03 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:10.183 03:55:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:10.183 03:55:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:10.183 03:55:03 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:10.183 03:55:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.183 03:55:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:10.183 03:55:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:10.183 03:55:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:10.183 03:55:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:10.183 03:55:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:10.183 03:55:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:10.183 03:55:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:10.183 03:55:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:10.183 03:55:03 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:10.183 03:55:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:10.183 03:55:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:10.183 03:55:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:10.183 03:55:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:10.183 03:55:03 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:10.183 03:55:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:10.183 03:55:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:10.183 03:55:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:10.183 03:55:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:10.183 03:55:03 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:10.183 03:55:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:10.183 03:55:03 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:10.183 03:55:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:10.183 03:55:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:10.183 03:55:03 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:10.183 03:55:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:10.183 03:55:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:10.183 03:55:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:10.183 03:55:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:10.183 03:55:03 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:10.183 03:55:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:10.183 03:55:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:10.183 03:55:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:10.183 03:55:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.183 03:55:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.183 03:55:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:10.183 03:55:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:10.183 03:55:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:10.442 No valid GPT data, bailing 00:04:10.442 03:55:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:10.442 03:55:03 -- scripts/common.sh@394 -- # pt= 00:04:10.442 03:55:03 -- scripts/common.sh@395 -- # return 1 00:04:10.442 03:55:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:10.442 1+0 records in 00:04:10.442 1+0 records out 00:04:10.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0094263 s, 111 MB/s 00:04:10.442 03:55:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.442 03:55:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.442 03:55:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:10.442 03:55:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:10.442 03:55:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:10.442 No valid GPT data, bailing 00:04:10.442 03:55:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:10.442 03:55:03 -- scripts/common.sh@394 -- # pt= 00:04:10.442 03:55:03 -- scripts/common.sh@395 -- # return 1 00:04:10.442 03:55:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:10.442 1+0 records in 00:04:10.442 1+0 records out 00:04:10.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392929 s, 267 MB/s 00:04:10.442 03:55:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.442 03:55:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.442 03:55:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:10.442 03:55:03 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:10.442 03:55:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:10.442 No valid GPT data, bailing 00:04:10.442 03:55:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:10.442 03:55:03 -- scripts/common.sh@394 -- # pt= 00:04:10.442 03:55:03 -- scripts/common.sh@395 -- # return 1 00:04:10.442 03:55:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:10.442 1+0 
records in 00:04:10.442 1+0 records out 00:04:10.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435679 s, 241 MB/s 00:04:10.442 03:55:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.442 03:55:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.442 03:55:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:10.442 03:55:03 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:10.442 03:55:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:10.442 No valid GPT data, bailing 00:04:10.442 03:55:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:10.442 03:55:03 -- scripts/common.sh@394 -- # pt= 00:04:10.442 03:55:03 -- scripts/common.sh@395 -- # return 1 00:04:10.442 03:55:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:10.442 1+0 records in 00:04:10.442 1+0 records out 00:04:10.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00355069 s, 295 MB/s 00:04:10.442 03:55:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.442 03:55:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.442 03:55:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:10.442 03:55:03 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:10.442 03:55:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:10.700 No valid GPT data, bailing 00:04:10.700 03:55:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:10.700 03:55:03 -- scripts/common.sh@394 -- # pt= 00:04:10.700 03:55:03 -- scripts/common.sh@395 -- # return 1 00:04:10.700 03:55:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:10.700 1+0 records in 00:04:10.700 1+0 records out 00:04:10.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00361296 s, 290 MB/s 00:04:10.700 03:55:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:10.700 03:55:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:10.700 03:55:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:10.700 03:55:03 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:10.700 03:55:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:10.700 No valid GPT data, bailing 00:04:10.700 03:55:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:10.700 03:55:03 -- scripts/common.sh@394 -- # pt= 00:04:10.700 03:55:03 -- scripts/common.sh@395 -- # return 1 00:04:10.700 03:55:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:10.700 1+0 records in 00:04:10.700 1+0 records out 00:04:10.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464544 s, 226 MB/s 00:04:10.700 03:55:03 -- spdk/autotest.sh@105 -- # sync 00:04:10.700 03:55:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:10.700 03:55:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:10.700 03:55:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:12.603 03:55:05 -- spdk/autotest.sh@111 -- # uname -s 00:04:12.603 03:55:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:12.603 03:55:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:12.603 03:55:05 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:12.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.169 
Hugepages 00:04:13.169 node hugesize free / total 00:04:13.169 node0 1048576kB 0 / 0 00:04:13.169 node0 2048kB 0 / 0 00:04:13.169 00:04:13.169 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.169 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:13.169 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:13.169 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:13.169 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:13.430 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:13.430 03:55:06 -- spdk/autotest.sh@117 -- # uname -s 00:04:13.430 03:55:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:13.430 03:55:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:13.430 03:55:06 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.289 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.289 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.289 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.289 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.547 03:55:07 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:15.481 03:55:08 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:15.481 03:55:08 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:15.481 03:55:08 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:15.481 03:55:08 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:15.481 03:55:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:15.481 03:55:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:15.481 03:55:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.481 03:55:08 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:15.481 03:55:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:15.481 03:55:08 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:15.481 03:55:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:15.481 03:55:08 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.997 Waiting for block devices as requested 00:04:15.997 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:15.997 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:15.997 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:16.256 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.522 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:21.523 03:55:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:21.523 03:55:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:21.523 03:55:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:21.523 03:55:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:21.523 03:55:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.523 03:55:14 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.523 03:55:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:21.523 03:55:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:21.523 03:55:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:21.523 03:55:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:21.523 03:55:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:21.523 03:55:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1541 -- # continue 00:04:21.523 03:55:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:21.523 03:55:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:21.523 03:55:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:21.523 03:55:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:21.523 03:55:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.523 03:55:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.523 03:55:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:21.523 03:55:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:21.523 03:55:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:21.523 03:55:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:21.523 03:55:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:21.523 03:55:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1541 -- # continue 00:04:21.523 03:55:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:21.523 03:55:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:21.523 03:55:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:21.523 03:55:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:21.523 03:55:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:21.523 03:55:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:21.523 03:55:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:21.523 03:55:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1541 -- # continue 00:04:21.523 03:55:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:21.523 03:55:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:21.523 03:55:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:21.523 03:55:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:21.523 03:55:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:21.523 03:55:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:21.523 03:55:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:21.523 03:55:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:21.523 03:55:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:21.523 03:55:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:21.523 03:55:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:21.523 03:55:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:21.523 03:55:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:21.523 03:55:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:04:21.523 03:55:14 -- common/autotest_common.sh@1541 -- # continue 00:04:21.523 03:55:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:21.523 03:55:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:21.523 03:55:14 -- common/autotest_common.sh@10 -- # set +x 00:04:21.523 03:55:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:21.523 03:55:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.523 03:55:14 -- common/autotest_common.sh@10 -- # set +x 00:04:21.523 03:55:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.781 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.346 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.346 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.346 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.346 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.605 03:55:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:22.605 03:55:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.605 03:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:22.605 03:55:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:22.605 03:55:15 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:22.605 03:55:15 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:22.605 03:55:15 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:22.605 03:55:15 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:22.605 03:55:15 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:22.605 03:55:15 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:22.605 03:55:15 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:22.605 03:55:15 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:22.605 03:55:15 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:22.605 03:55:15 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.605 03:55:15 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:22.605 03:55:15 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:22.605 03:55:15 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:22.605 03:55:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:22.605 03:55:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:22.605 03:55:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:22.605 03:55:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:22.605 03:55:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.605 03:55:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:22.605 03:55:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:22.605 03:55:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:22.605 03:55:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.605 03:55:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:22.605 03:55:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:22.605 03:55:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:22.605 03:55:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:22.605 03:55:15 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:22.605 03:55:15 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:22.605 03:55:15 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:22.605 03:55:15 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.605 03:55:15 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:22.605 03:55:15 -- common/autotest_common.sh@1570 -- # return 0 00:04:22.605 03:55:15 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:22.605 03:55:15 -- common/autotest_common.sh@1578 -- # return 0 00:04:22.605 03:55:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:22.605 03:55:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:22.605 03:55:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:22.605 03:55:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:22.605 03:55:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:22.605 03:55:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.605 03:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:22.605 03:55:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:22.605 03:55:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.605 03:55:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.605 03:55:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.605 03:55:15 -- common/autotest_common.sh@10 -- # set +x 00:04:22.605 ************************************ 00:04:22.605 START TEST env 00:04:22.605 ************************************ 00:04:22.605 03:55:15 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.605 * Looking for test storage... 00:04:22.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:22.605 03:55:15 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:22.605 03:55:15 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:22.605 03:55:15 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:22.864 03:55:15 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:22.864 03:55:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.864 03:55:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.864 03:55:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.864 03:55:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.864 03:55:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.864 03:55:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.864 03:55:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.864 03:55:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.864 03:55:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.864 03:55:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.864 03:55:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.864 03:55:15 env -- scripts/common.sh@344 -- # case "$op" in 00:04:22.864 03:55:15 env -- scripts/common.sh@345 -- # : 1 00:04:22.864 03:55:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.864 03:55:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.864 03:55:15 env -- scripts/common.sh@365 -- # decimal 1 00:04:22.864 03:55:15 env -- scripts/common.sh@353 -- # local d=1 00:04:22.864 03:55:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.864 03:55:15 env -- scripts/common.sh@355 -- # echo 1 00:04:22.864 03:55:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.864 03:55:15 env -- scripts/common.sh@366 -- # decimal 2 00:04:22.864 03:55:15 env -- scripts/common.sh@353 -- # local d=2 00:04:22.864 03:55:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.864 03:55:15 env -- scripts/common.sh@355 -- # echo 2 00:04:22.864 03:55:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.864 03:55:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.864 03:55:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.864 03:55:15 env -- scripts/common.sh@368 -- # return 0 00:04:22.864 03:55:15 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.864 03:55:15 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:22.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.864 --rc genhtml_branch_coverage=1 00:04:22.864 --rc genhtml_function_coverage=1 00:04:22.864 --rc genhtml_legend=1 00:04:22.864 --rc geninfo_all_blocks=1 00:04:22.864 --rc geninfo_unexecuted_blocks=1 00:04:22.864 00:04:22.864 ' 00:04:22.864 03:55:15 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:22.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.864 --rc genhtml_branch_coverage=1 00:04:22.864 --rc genhtml_function_coverage=1 00:04:22.864 --rc genhtml_legend=1 00:04:22.864 --rc geninfo_all_blocks=1 00:04:22.864 --rc geninfo_unexecuted_blocks=1 00:04:22.864 00:04:22.864 ' 00:04:22.864 03:55:15 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:22.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.864 --rc genhtml_branch_coverage=1 00:04:22.864 --rc genhtml_function_coverage=1 00:04:22.864 --rc genhtml_legend=1 00:04:22.864 --rc geninfo_all_blocks=1 00:04:22.864 --rc geninfo_unexecuted_blocks=1 00:04:22.864 00:04:22.864 ' 00:04:22.864 03:55:15 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:22.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.864 --rc genhtml_branch_coverage=1 00:04:22.864 --rc genhtml_function_coverage=1 00:04:22.864 --rc genhtml_legend=1 00:04:22.864 --rc geninfo_all_blocks=1 00:04:22.864 --rc geninfo_unexecuted_blocks=1 00:04:22.864 00:04:22.864 ' 00:04:22.864 03:55:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:22.864 03:55:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.864 03:55:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.864 03:55:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.864 ************************************ 00:04:22.864 START TEST env_memory 00:04:22.864 ************************************ 00:04:22.864 03:55:15 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:22.864 00:04:22.864 00:04:22.864 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.864 http://cunit.sourceforge.net/ 00:04:22.864 00:04:22.864 00:04:22.864 Suite: memory 00:04:22.864 Test: alloc and free memory map ...[2024-10-13 03:55:15.891218] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:22.864 passed 00:04:22.864 Test: mem map translation ...[2024-10-13 03:55:15.929848] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:22.864 [2024-10-13 03:55:15.929887] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:22.864 [2024-10-13 03:55:15.929944] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:22.864 [2024-10-13 03:55:15.929958] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:22.864 passed 00:04:22.864 Test: mem map registration ...[2024-10-13 03:55:15.997919] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:22.864 [2024-10-13 03:55:15.997955] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:23.123 passed 00:04:23.123 Test: mem map adjacent registrations ...passed 00:04:23.123 00:04:23.123 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.123 suites 1 1 n/a 0 0 00:04:23.123 tests 4 4 4 0 0 00:04:23.123 asserts 152 152 152 0 n/a 00:04:23.123 00:04:23.123 Elapsed time = 0.233 seconds 00:04:23.123 00:04:23.123 real 0m0.267s 00:04:23.123 user 0m0.240s 00:04:23.123 sys 0m0.019s 00:04:23.123 03:55:16 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.123 ************************************ 00:04:23.123 END TEST env_memory 00:04:23.123 ************************************ 00:04:23.123 03:55:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:23.123 03:55:16 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.123 03:55:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.123 03:55:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.123 03:55:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.123 ************************************ 00:04:23.123 START TEST env_vtophys 00:04:23.123 ************************************ 00:04:23.123 03:55:16 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.123 EAL: lib.eal log level changed from notice to debug 00:04:23.123 EAL: Detected lcore 0 as core 0 on socket 0 00:04:23.123 EAL: Detected lcore 1 as core 0 on socket 0 00:04:23.123 EAL: Detected lcore 2 as core 0 on socket 0 00:04:23.123 EAL: Detected lcore 3 as core 0 on socket 0 00:04:23.123 EAL: Detected lcore 4 as core 0 on socket 0 00:04:23.123 EAL: Detected lcore 5 as core 0 on socket 0 00:04:23.123 EAL: Detected lcore 6 as core 0 on socket 0 00:04:23.123 EAL: Detected lcore 7 as core 0 on socket 0 00:04:23.123 EAL: Detected lcore 8 as core 0 on socket 0 00:04:23.123 EAL: Detected lcore 9 as core 0 on socket 0 00:04:23.123 EAL: Maximum logical cores by configuration: 128 00:04:23.123 EAL: Detected CPU lcores: 10 00:04:23.123 EAL: Detected NUMA nodes: 1 00:04:23.123 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:23.123 EAL: Detected shared linkage of DPDK 00:04:23.123 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:23.123 EAL: Selected IOVA mode 'PA' 00:04:23.123 EAL: Probing VFIO support... 00:04:23.123 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.123 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:23.123 EAL: Ask a virtual area of 0x2e000 bytes 00:04:23.123 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:23.123 EAL: Setting up physically contiguous memory... 00:04:23.123 EAL: Setting maximum number of open files to 524288 00:04:23.123 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:23.123 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:23.123 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.123 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:23.123 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.123 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.123 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:23.123 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:23.123 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.123 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:23.123 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.123 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.123 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:23.123 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:23.123 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.123 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:23.123 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.123 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.123 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:23.123 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:23.123 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.123 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:23.123 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.123 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.123 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:23.123 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:23.123 EAL: Hugepages will be freed exactly as allocated. 00:04:23.123 EAL: No shared files mode enabled, IPC is disabled 00:04:23.123 EAL: No shared files mode enabled, IPC is disabled 00:04:23.382 EAL: TSC frequency is ~2600000 KHz 00:04:23.382 EAL: Main lcore 0 is ready (tid=7feeeb2b3a40;cpuset=[0]) 00:04:23.382 EAL: Trying to obtain current memory policy. 00:04:23.382 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.382 EAL: Restoring previous memory policy: 0 00:04:23.382 EAL: request: mp_malloc_sync 00:04:23.382 EAL: No shared files mode enabled, IPC is disabled 00:04:23.382 EAL: Heap on socket 0 was expanded by 2MB 00:04:23.382 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.382 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:23.382 EAL: Mem event callback 'spdk:(nil)' registered 00:04:23.382 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:23.382 00:04:23.382 00:04:23.382 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.382 http://cunit.sourceforge.net/ 00:04:23.382 00:04:23.382 00:04:23.382 Suite: components_suite 00:04:23.640 Test: vtophys_malloc_test ...passed 00:04:23.640 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:23.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.640 EAL: Restoring previous memory policy: 4 00:04:23.640 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.640 EAL: request: mp_malloc_sync 00:04:23.640 EAL: No shared files mode enabled, IPC is disabled 00:04:23.640 EAL: Heap on socket 0 was expanded by 4MB 00:04:23.640 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.640 EAL: request: mp_malloc_sync 00:04:23.640 EAL: No shared files mode enabled, IPC is disabled 00:04:23.640 EAL: Heap on socket 0 was shrunk by 4MB 00:04:23.640 EAL: Trying to obtain current memory policy. 00:04:23.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.640 EAL: Restoring previous memory policy: 4 00:04:23.640 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.640 EAL: request: mp_malloc_sync 00:04:23.640 EAL: No shared files mode enabled, IPC is disabled 00:04:23.640 EAL: Heap on socket 0 was expanded by 6MB 00:04:23.640 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.640 EAL: request: mp_malloc_sync 00:04:23.640 EAL: No shared files mode enabled, IPC is disabled 00:04:23.640 EAL: Heap on socket 0 was shrunk by 6MB 00:04:23.640 EAL: Trying to obtain current memory policy. 00:04:23.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.640 EAL: Restoring previous memory policy: 4 00:04:23.640 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.640 EAL: request: mp_malloc_sync 00:04:23.640 EAL: No shared files mode enabled, IPC is disabled 00:04:23.640 EAL: Heap on socket 0 was expanded by 10MB 00:04:23.640 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.640 EAL: request: mp_malloc_sync 00:04:23.640 EAL: No shared files mode enabled, IPC is disabled 00:04:23.640 EAL: Heap on socket 0 was shrunk by 10MB 00:04:23.640 EAL: Trying to obtain current memory policy. 00:04:23.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.640 EAL: Restoring previous memory policy: 4 00:04:23.640 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.640 EAL: request: mp_malloc_sync 00:04:23.640 EAL: No shared files mode enabled, IPC is disabled 00:04:23.640 EAL: Heap on socket 0 was expanded by 18MB 00:04:23.640 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.640 EAL: request: mp_malloc_sync 00:04:23.640 EAL: No shared files mode enabled, IPC is disabled 00:04:23.640 EAL: Heap on socket 0 was shrunk by 18MB 00:04:23.640 EAL: Trying to obtain current memory policy. 00:04:23.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.640 EAL: Restoring previous memory policy: 4 00:04:23.640 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.640 EAL: request: mp_malloc_sync 00:04:23.640 EAL: No shared files mode enabled, IPC is disabled 00:04:23.640 EAL: Heap on socket 0 was expanded by 34MB 00:04:23.640 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.640 EAL: request: mp_malloc_sync 00:04:23.640 EAL: No shared files mode enabled, IPC is disabled 00:04:23.640 EAL: Heap on socket 0 was shrunk by 34MB 00:04:23.640 EAL: Trying to obtain current memory policy. 
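A side note on the "Module /sys/module/vfio not found" messages above: EAL probes VFIO first and, finding neither vfio nor vfio_pci loaded, skips VFIO support, which is also why setup.sh bound the NVMe controllers to uio_pci_generic earlier in the run. A small hedged check that mirrors what EAL is looking at, using plain sysfs reads rather than anything SPDK-specific:

    # Report whether VFIO could be used, and which driver each NVMe-class
    # device is currently bound to (0x010802 is the PCI NVMe class code).
    if [[ -e /sys/module/vfio && -e /sys/module/vfio_pci ]]; then
        echo "vfio + vfio_pci loaded: setup.sh could bind NVMe devices to vfio-pci"
    else
        echo "vfio modules missing: userspace I/O falls back to uio_pci_generic (as in this run)"
    fi
    for dev in /sys/bus/pci/devices/*; do
        [[ $(<"$dev/class") == 0x010802 ]] || continue
        driver=none
        [[ -e $dev/driver ]] && driver=$(basename "$(readlink -f "$dev/driver")")
        printf '%s -> %s\n' "${dev##*/}" "$driver"
    done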
00:04:23.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.640 EAL: Restoring previous memory policy: 4 00:04:23.640 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.640 EAL: request: mp_malloc_sync 00:04:23.640 EAL: No shared files mode enabled, IPC is disabled 00:04:23.640 EAL: Heap on socket 0 was expanded by 66MB 00:04:23.901 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.901 EAL: request: mp_malloc_sync 00:04:23.901 EAL: No shared files mode enabled, IPC is disabled 00:04:23.901 EAL: Heap on socket 0 was shrunk by 66MB 00:04:23.901 EAL: Trying to obtain current memory policy. 00:04:23.901 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.901 EAL: Restoring previous memory policy: 4 00:04:23.901 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.901 EAL: request: mp_malloc_sync 00:04:23.901 EAL: No shared files mode enabled, IPC is disabled 00:04:23.901 EAL: Heap on socket 0 was expanded by 130MB 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was shrunk by 130MB 00:04:24.160 EAL: Trying to obtain current memory policy. 00:04:24.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.160 EAL: Restoring previous memory policy: 4 00:04:24.160 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.160 EAL: request: mp_malloc_sync 00:04:24.160 EAL: No shared files mode enabled, IPC is disabled 00:04:24.160 EAL: Heap on socket 0 was expanded by 258MB 00:04:24.419 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.419 EAL: request: mp_malloc_sync 00:04:24.419 EAL: No shared files mode enabled, IPC is disabled 00:04:24.419 EAL: Heap on socket 0 was shrunk by 258MB 00:04:24.677 EAL: Trying to obtain current memory policy. 00:04:24.677 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.935 EAL: Restoring previous memory policy: 4 00:04:24.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.935 EAL: request: mp_malloc_sync 00:04:24.935 EAL: No shared files mode enabled, IPC is disabled 00:04:24.935 EAL: Heap on socket 0 was expanded by 514MB 00:04:25.194 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.194 EAL: request: mp_malloc_sync 00:04:25.194 EAL: No shared files mode enabled, IPC is disabled 00:04:25.194 EAL: Heap on socket 0 was shrunk by 514MB 00:04:25.760 EAL: Trying to obtain current memory policy. 
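The heap expansions logged in this suite (4, 6, 10, 18, 34, 66, 130, 258 and 514 MB so far, with a final 1026 MB step just below) are each a power-of-two allocation plus one extra 2 MB hugepage. That reading is an inference from the logged numbers rather than from the test source, but the arithmetic lines up exactly:

    # Reproduce the expansion sizes reported by vtophys_spdk_malloc_test:
    # a 2 MB .. 1024 MB power-of-two allocation plus ~2 MB of overhead.
    for mb in 2 4 8 16 32 64 128 256 512 1024; do
        printf '%5d MB alloc -> heap expanded by %5d MB\n' "$mb" $((mb + 2))
    done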
00:04:25.760 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.760 EAL: Restoring previous memory policy: 4 00:04:25.760 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.760 EAL: request: mp_malloc_sync 00:04:25.760 EAL: No shared files mode enabled, IPC is disabled 00:04:25.760 EAL: Heap on socket 0 was expanded by 1026MB 00:04:26.695 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.695 EAL: request: mp_malloc_sync 00:04:26.695 EAL: No shared files mode enabled, IPC is disabled 00:04:26.695 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:27.628 passed 00:04:27.628 00:04:27.628 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.628 suites 1 1 n/a 0 0 00:04:27.628 tests 2 2 2 0 0 00:04:27.628 asserts 5859 5859 5859 0 n/a 00:04:27.628 00:04:27.628 Elapsed time = 4.225 seconds 00:04:27.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.628 EAL: request: mp_malloc_sync 00:04:27.628 EAL: No shared files mode enabled, IPC is disabled 00:04:27.629 EAL: Heap on socket 0 was shrunk by 2MB 00:04:27.629 EAL: No shared files mode enabled, IPC is disabled 00:04:27.629 EAL: No shared files mode enabled, IPC is disabled 00:04:27.629 EAL: No shared files mode enabled, IPC is disabled 00:04:27.629 00:04:27.629 real 0m4.471s 00:04:27.629 user 0m3.721s 00:04:27.629 sys 0m0.607s 00:04:27.629 03:55:20 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.629 ************************************ 00:04:27.629 END TEST env_vtophys 00:04:27.629 ************************************ 00:04:27.629 03:55:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:27.629 03:55:20 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:27.629 03:55:20 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.629 03:55:20 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.629 03:55:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.629 ************************************ 00:04:27.629 START TEST env_pci 00:04:27.629 ************************************ 00:04:27.629 03:55:20 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:27.629 00:04:27.629 00:04:27.629 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.629 http://cunit.sourceforge.net/ 00:04:27.629 00:04:27.629 00:04:27.629 Suite: pci 00:04:27.629 Test: pci_hook ...[2024-10-13 03:55:20.726590] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57380 has claimed it 00:04:27.629 passed 00:04:27.629 00:04:27.629 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.629 suites 1 1 n/a 0 0 00:04:27.629 tests 1 1 1 0 0 00:04:27.629 asserts 25 25 25 0 n/a 00:04:27.629 00:04:27.629 Elapsed time = 0.006 seconds 00:04:27.629 EAL: Cannot find device (10000:00:01.0) 00:04:27.629 EAL: Failed to attach device on primary process 00:04:27.629 00:04:27.629 real 0m0.059s 00:04:27.629 user 0m0.029s 00:04:27.629 sys 0m0.029s 00:04:27.629 03:55:20 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.629 ************************************ 00:04:27.629 END TEST env_pci 00:04:27.629 ************************************ 00:04:27.629 03:55:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:27.887 03:55:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:27.887 03:55:20 env -- env/env.sh@15 -- # uname 00:04:27.887 03:55:20 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:27.887 03:55:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:27.887 03:55:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:27.887 03:55:20 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:27.887 03:55:20 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.887 03:55:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.887 ************************************ 00:04:27.887 START TEST env_dpdk_post_init 00:04:27.887 ************************************ 00:04:27.887 03:55:20 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:27.887 EAL: Detected CPU lcores: 10 00:04:27.887 EAL: Detected NUMA nodes: 1 00:04:27.887 EAL: Detected shared linkage of DPDK 00:04:27.887 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:27.887 EAL: Selected IOVA mode 'PA' 00:04:27.887 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:27.887 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:27.887 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:27.887 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:27.887 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:28.145 Starting DPDK initialization... 00:04:28.145 Starting SPDK post initialization... 00:04:28.145 SPDK NVMe probe 00:04:28.145 Attaching to 0000:00:10.0 00:04:28.145 Attaching to 0000:00:11.0 00:04:28.145 Attaching to 0000:00:12.0 00:04:28.145 Attaching to 0000:00:13.0 00:04:28.145 Attached to 0000:00:13.0 00:04:28.145 Attached to 0000:00:10.0 00:04:28.145 Attached to 0000:00:11.0 00:04:28.145 Attached to 0000:00:12.0 00:04:28.145 Cleaning up... 
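Two things worth noting in the post-init run above: env.sh builds the argument list incrementally, '-c 0x1' always plus '--base-virtaddr=0x200000000000' once the uname check confirms Linux, and the attach-complete order (13.0, 10.0, 11.0, 12.0) need not match the attach-request order. A hedged sketch of re-running the same probe by hand with those flags, assuming the controllers are still bound to a userspace driver as set up earlier:

    # Same invocation the env suite used above; run from the SPDK checkout.
    cd /home/vagrant/spdk_repo/spdk
    argv='-c 0x1 '
    [[ $(uname) == Linux ]] && argv+='--base-virtaddr=0x200000000000'
    # $argv is deliberately unquoted so the two flags split into separate
    # arguments, mirroring how the harness passes them.
    ./test/env/env_dpdk_post_init/env_dpdk_post_init $argv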
00:04:28.145 00:04:28.145 real 0m0.233s 00:04:28.145 user 0m0.064s 00:04:28.145 sys 0m0.071s 00:04:28.145 03:55:21 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.145 ************************************ 00:04:28.145 END TEST env_dpdk_post_init 00:04:28.145 ************************************ 00:04:28.145 03:55:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.145 03:55:21 env -- env/env.sh@26 -- # uname 00:04:28.145 03:55:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:28.145 03:55:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.145 03:55:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.145 03:55:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.145 03:55:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.145 ************************************ 00:04:28.145 START TEST env_mem_callbacks 00:04:28.145 ************************************ 00:04:28.145 03:55:21 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.145 EAL: Detected CPU lcores: 10 00:04:28.145 EAL: Detected NUMA nodes: 1 00:04:28.145 EAL: Detected shared linkage of DPDK 00:04:28.145 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.145 EAL: Selected IOVA mode 'PA' 00:04:28.145 00:04:28.145 00:04:28.145 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.145 http://cunit.sourceforge.net/ 00:04:28.145 00:04:28.145 00:04:28.145 Suite: memory 00:04:28.145 Test: test ... 00:04:28.145 register 0x200000200000 2097152 00:04:28.145 malloc 3145728 00:04:28.145 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:28.145 register 0x200000400000 4194304 00:04:28.145 buf 0x2000004fffc0 len 3145728 PASSED 00:04:28.145 malloc 64 00:04:28.145 buf 0x2000004ffec0 len 64 PASSED 00:04:28.145 malloc 4194304 00:04:28.145 register 0x200000800000 6291456 00:04:28.145 buf 0x2000009fffc0 len 4194304 PASSED 00:04:28.145 free 0x2000004fffc0 3145728 00:04:28.145 free 0x2000004ffec0 64 00:04:28.145 unregister 0x200000400000 4194304 PASSED 00:04:28.145 free 0x2000009fffc0 4194304 00:04:28.145 unregister 0x200000800000 6291456 PASSED 00:04:28.145 malloc 8388608 00:04:28.145 register 0x200000400000 10485760 00:04:28.145 buf 0x2000005fffc0 len 8388608 PASSED 00:04:28.145 free 0x2000005fffc0 8388608 00:04:28.404 unregister 0x200000400000 10485760 PASSED 00:04:28.404 passed 00:04:28.404 00:04:28.404 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.404 suites 1 1 n/a 0 0 00:04:28.404 tests 1 1 1 0 0 00:04:28.404 asserts 15 15 15 0 n/a 00:04:28.404 00:04:28.404 Elapsed time = 0.043 seconds 00:04:28.404 00:04:28.404 real 0m0.211s 00:04:28.404 user 0m0.060s 00:04:28.404 sys 0m0.048s 00:04:28.404 ************************************ 00:04:28.404 END TEST env_mem_callbacks 00:04:28.404 03:55:21 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.404 03:55:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:28.404 ************************************ 00:04:28.404 ************************************ 00:04:28.404 END TEST env 00:04:28.404 ************************************ 00:04:28.404 00:04:28.404 real 0m5.698s 00:04:28.404 user 0m4.253s 00:04:28.404 sys 0m0.988s 00:04:28.404 03:55:21 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.404 03:55:21 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:28.404 03:55:21 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:28.404 03:55:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.404 03:55:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.404 03:55:21 -- common/autotest_common.sh@10 -- # set +x 00:04:28.404 ************************************ 00:04:28.404 START TEST rpc 00:04:28.404 ************************************ 00:04:28.404 03:55:21 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:28.404 * Looking for test storage... 00:04:28.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:28.404 03:55:21 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:28.404 03:55:21 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:28.404 03:55:21 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:28.662 03:55:21 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:28.662 03:55:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.662 03:55:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.662 03:55:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.662 03:55:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.662 03:55:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.662 03:55:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.662 03:55:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.662 03:55:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.662 03:55:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.662 03:55:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.662 03:55:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.662 03:55:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:28.662 03:55:21 rpc -- scripts/common.sh@345 -- # : 1 00:04:28.662 03:55:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.662 03:55:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.662 03:55:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:28.662 03:55:21 rpc -- scripts/common.sh@353 -- # local d=1 00:04:28.662 03:55:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.662 03:55:21 rpc -- scripts/common.sh@355 -- # echo 1 00:04:28.662 03:55:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.662 03:55:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:28.662 03:55:21 rpc -- scripts/common.sh@353 -- # local d=2 00:04:28.662 03:55:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.662 03:55:21 rpc -- scripts/common.sh@355 -- # echo 2 00:04:28.662 03:55:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.662 03:55:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.662 03:55:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.662 03:55:21 rpc -- scripts/common.sh@368 -- # return 0 00:04:28.662 03:55:21 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.662 03:55:21 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:28.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.662 --rc genhtml_branch_coverage=1 00:04:28.662 --rc genhtml_function_coverage=1 00:04:28.662 --rc genhtml_legend=1 00:04:28.662 --rc geninfo_all_blocks=1 00:04:28.662 --rc geninfo_unexecuted_blocks=1 00:04:28.662 00:04:28.662 ' 00:04:28.662 03:55:21 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:28.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.662 --rc genhtml_branch_coverage=1 00:04:28.662 --rc genhtml_function_coverage=1 00:04:28.662 --rc genhtml_legend=1 00:04:28.662 --rc geninfo_all_blocks=1 00:04:28.662 --rc geninfo_unexecuted_blocks=1 00:04:28.662 00:04:28.662 ' 00:04:28.662 03:55:21 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:28.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.662 --rc genhtml_branch_coverage=1 00:04:28.662 --rc genhtml_function_coverage=1 00:04:28.662 --rc genhtml_legend=1 00:04:28.662 --rc geninfo_all_blocks=1 00:04:28.662 --rc geninfo_unexecuted_blocks=1 00:04:28.662 00:04:28.662 ' 00:04:28.662 03:55:21 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:28.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.662 --rc genhtml_branch_coverage=1 00:04:28.662 --rc genhtml_function_coverage=1 00:04:28.662 --rc genhtml_legend=1 00:04:28.662 --rc geninfo_all_blocks=1 00:04:28.662 --rc geninfo_unexecuted_blocks=1 00:04:28.662 00:04:28.662 ' 00:04:28.662 03:55:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57502 00:04:28.662 03:55:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.662 03:55:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57502 00:04:28.662 03:55:21 rpc -- common/autotest_common.sh@831 -- # '[' -z 57502 ']' 00:04:28.662 03:55:21 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.662 03:55:21 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:28.662 03:55:21 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:28.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.662 03:55:21 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
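For reference, the rpc_integrity checks that follow drive a create/inspect/delete cycle entirely over the RPC socket the target is about to listen on; the same cycle can be reproduced by hand with scripts/rpc.py. A sketch under the assumption that spdk_tgt is started the same way as below (with '-e bdev') and that the user already has hugepage/device permissions from setup.sh:

    # Hand-driven version of the rpc_integrity cycle below, using scripts/rpc.py
    # against /var/tmp/spdk.sock (the default RPC listen address).
    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    sleep 2                                   # the harness uses waitforlisten instead of a fixed sleep
    ./scripts/rpc.py bdev_malloc_create 8 512                      # 8 MB in 512 B blocks -> the 16384-block Malloc0 below
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # Passthru0 claims Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # back to 0
    kill "$tgt_pid"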
00:04:28.662 03:55:21 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:28.662 03:55:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.662 [2024-10-13 03:55:21.657250] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:28.663 [2024-10-13 03:55:21.657374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57502 ] 00:04:28.663 [2024-10-13 03:55:21.807075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.921 [2024-10-13 03:55:21.900900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:28.921 [2024-10-13 03:55:21.900949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57502' to capture a snapshot of events at runtime. 00:04:28.921 [2024-10-13 03:55:21.900958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:28.921 [2024-10-13 03:55:21.900968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:28.921 [2024-10-13 03:55:21.900975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57502 for offline analysis/debug. 00:04:28.921 [2024-10-13 03:55:21.901823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.489 03:55:22 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:29.489 03:55:22 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:29.489 03:55:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:29.489 03:55:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:29.489 03:55:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:29.489 03:55:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:29.489 03:55:22 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.489 03:55:22 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.489 03:55:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.489 ************************************ 00:04:29.489 START TEST rpc_integrity 00:04:29.489 ************************************ 00:04:29.489 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:29.489 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:29.489 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.489 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.489 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.489 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:29.489 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:29.489 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:29.489 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:29.489 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.489 03:55:22 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.489 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.489 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:29.489 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:29.489 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.489 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.489 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.489 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:29.489 { 00:04:29.489 "name": "Malloc0", 00:04:29.489 "aliases": [ 00:04:29.489 "6cdef5bf-af0c-4f19-99fd-0b2d7268af32" 00:04:29.489 ], 00:04:29.489 "product_name": "Malloc disk", 00:04:29.489 "block_size": 512, 00:04:29.489 "num_blocks": 16384, 00:04:29.489 "uuid": "6cdef5bf-af0c-4f19-99fd-0b2d7268af32", 00:04:29.489 "assigned_rate_limits": { 00:04:29.489 "rw_ios_per_sec": 0, 00:04:29.489 "rw_mbytes_per_sec": 0, 00:04:29.489 "r_mbytes_per_sec": 0, 00:04:29.489 "w_mbytes_per_sec": 0 00:04:29.489 }, 00:04:29.489 "claimed": false, 00:04:29.489 "zoned": false, 00:04:29.489 "supported_io_types": { 00:04:29.489 "read": true, 00:04:29.489 "write": true, 00:04:29.489 "unmap": true, 00:04:29.489 "flush": true, 00:04:29.489 "reset": true, 00:04:29.489 "nvme_admin": false, 00:04:29.489 "nvme_io": false, 00:04:29.489 "nvme_io_md": false, 00:04:29.489 "write_zeroes": true, 00:04:29.489 "zcopy": true, 00:04:29.489 "get_zone_info": false, 00:04:29.489 "zone_management": false, 00:04:29.490 "zone_append": false, 00:04:29.490 "compare": false, 00:04:29.490 "compare_and_write": false, 00:04:29.490 "abort": true, 00:04:29.490 "seek_hole": false, 00:04:29.490 "seek_data": false, 00:04:29.490 "copy": true, 00:04:29.490 "nvme_iov_md": false 00:04:29.490 }, 00:04:29.490 "memory_domains": [ 00:04:29.490 { 00:04:29.490 "dma_device_id": "system", 00:04:29.490 "dma_device_type": 1 00:04:29.490 }, 00:04:29.490 { 00:04:29.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.490 "dma_device_type": 2 00:04:29.490 } 00:04:29.490 ], 00:04:29.490 "driver_specific": {} 00:04:29.490 } 00:04:29.490 ]' 00:04:29.490 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:29.490 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:29.490 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:29.490 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.490 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.490 [2024-10-13 03:55:22.613066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:29.490 [2024-10-13 03:55:22.613122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:29.490 [2024-10-13 03:55:22.613151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:29.490 [2024-10-13 03:55:22.613162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:29.490 [2024-10-13 03:55:22.615299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:29.490 [2024-10-13 03:55:22.615341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:29.490 Passthru0 00:04:29.490 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.490 
03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:29.490 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.490 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.490 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.490 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:29.490 { 00:04:29.490 "name": "Malloc0", 00:04:29.490 "aliases": [ 00:04:29.490 "6cdef5bf-af0c-4f19-99fd-0b2d7268af32" 00:04:29.490 ], 00:04:29.490 "product_name": "Malloc disk", 00:04:29.490 "block_size": 512, 00:04:29.490 "num_blocks": 16384, 00:04:29.490 "uuid": "6cdef5bf-af0c-4f19-99fd-0b2d7268af32", 00:04:29.490 "assigned_rate_limits": { 00:04:29.490 "rw_ios_per_sec": 0, 00:04:29.490 "rw_mbytes_per_sec": 0, 00:04:29.490 "r_mbytes_per_sec": 0, 00:04:29.490 "w_mbytes_per_sec": 0 00:04:29.490 }, 00:04:29.490 "claimed": true, 00:04:29.490 "claim_type": "exclusive_write", 00:04:29.490 "zoned": false, 00:04:29.490 "supported_io_types": { 00:04:29.490 "read": true, 00:04:29.490 "write": true, 00:04:29.490 "unmap": true, 00:04:29.490 "flush": true, 00:04:29.490 "reset": true, 00:04:29.490 "nvme_admin": false, 00:04:29.490 "nvme_io": false, 00:04:29.490 "nvme_io_md": false, 00:04:29.490 "write_zeroes": true, 00:04:29.490 "zcopy": true, 00:04:29.490 "get_zone_info": false, 00:04:29.490 "zone_management": false, 00:04:29.490 "zone_append": false, 00:04:29.490 "compare": false, 00:04:29.490 "compare_and_write": false, 00:04:29.490 "abort": true, 00:04:29.490 "seek_hole": false, 00:04:29.490 "seek_data": false, 00:04:29.490 "copy": true, 00:04:29.490 "nvme_iov_md": false 00:04:29.490 }, 00:04:29.490 "memory_domains": [ 00:04:29.490 { 00:04:29.490 "dma_device_id": "system", 00:04:29.490 "dma_device_type": 1 00:04:29.490 }, 00:04:29.490 { 00:04:29.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.490 "dma_device_type": 2 00:04:29.490 } 00:04:29.490 ], 00:04:29.490 "driver_specific": {} 00:04:29.490 }, 00:04:29.490 { 00:04:29.490 "name": "Passthru0", 00:04:29.490 "aliases": [ 00:04:29.490 "01bc0f6a-1378-5e26-8fa9-66872b4efac4" 00:04:29.490 ], 00:04:29.490 "product_name": "passthru", 00:04:29.490 "block_size": 512, 00:04:29.490 "num_blocks": 16384, 00:04:29.490 "uuid": "01bc0f6a-1378-5e26-8fa9-66872b4efac4", 00:04:29.490 "assigned_rate_limits": { 00:04:29.490 "rw_ios_per_sec": 0, 00:04:29.490 "rw_mbytes_per_sec": 0, 00:04:29.490 "r_mbytes_per_sec": 0, 00:04:29.490 "w_mbytes_per_sec": 0 00:04:29.490 }, 00:04:29.490 "claimed": false, 00:04:29.490 "zoned": false, 00:04:29.490 "supported_io_types": { 00:04:29.490 "read": true, 00:04:29.490 "write": true, 00:04:29.490 "unmap": true, 00:04:29.490 "flush": true, 00:04:29.490 "reset": true, 00:04:29.490 "nvme_admin": false, 00:04:29.490 "nvme_io": false, 00:04:29.490 "nvme_io_md": false, 00:04:29.490 "write_zeroes": true, 00:04:29.490 "zcopy": true, 00:04:29.490 "get_zone_info": false, 00:04:29.490 "zone_management": false, 00:04:29.490 "zone_append": false, 00:04:29.490 "compare": false, 00:04:29.490 "compare_and_write": false, 00:04:29.490 "abort": true, 00:04:29.490 "seek_hole": false, 00:04:29.490 "seek_data": false, 00:04:29.490 "copy": true, 00:04:29.490 "nvme_iov_md": false 00:04:29.490 }, 00:04:29.490 "memory_domains": [ 00:04:29.490 { 00:04:29.490 "dma_device_id": "system", 00:04:29.490 "dma_device_type": 1 00:04:29.490 }, 00:04:29.490 { 00:04:29.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.490 "dma_device_type": 2 
00:04:29.490 } 00:04:29.490 ], 00:04:29.490 "driver_specific": { 00:04:29.490 "passthru": { 00:04:29.490 "name": "Passthru0", 00:04:29.490 "base_bdev_name": "Malloc0" 00:04:29.490 } 00:04:29.490 } 00:04:29.490 } 00:04:29.490 ]' 00:04:29.490 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:29.792 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:29.792 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:29.792 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.792 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.792 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.792 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:29.792 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.792 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.792 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.792 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:29.792 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.792 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.792 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.792 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:29.792 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:29.792 03:55:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:29.792 00:04:29.792 real 0m0.242s 00:04:29.792 user 0m0.129s 00:04:29.792 sys 0m0.031s 00:04:29.792 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.792 ************************************ 00:04:29.792 END TEST rpc_integrity 00:04:29.792 ************************************ 00:04:29.792 03:55:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.792 03:55:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:29.792 03:55:22 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.792 03:55:22 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.792 03:55:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.792 ************************************ 00:04:29.792 START TEST rpc_plugins 00:04:29.792 ************************************ 00:04:29.792 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:29.792 03:55:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:29.792 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.792 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.792 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.792 03:55:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:29.792 03:55:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:29.792 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.792 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.792 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.792 03:55:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:29.792 { 00:04:29.792 "name": "Malloc1", 00:04:29.792 "aliases": 
[ 00:04:29.792 "6a49aa16-6dd9-4ccd-b8fd-480acf41fad2" 00:04:29.792 ], 00:04:29.792 "product_name": "Malloc disk", 00:04:29.792 "block_size": 4096, 00:04:29.792 "num_blocks": 256, 00:04:29.792 "uuid": "6a49aa16-6dd9-4ccd-b8fd-480acf41fad2", 00:04:29.792 "assigned_rate_limits": { 00:04:29.792 "rw_ios_per_sec": 0, 00:04:29.792 "rw_mbytes_per_sec": 0, 00:04:29.792 "r_mbytes_per_sec": 0, 00:04:29.792 "w_mbytes_per_sec": 0 00:04:29.792 }, 00:04:29.792 "claimed": false, 00:04:29.792 "zoned": false, 00:04:29.792 "supported_io_types": { 00:04:29.792 "read": true, 00:04:29.792 "write": true, 00:04:29.792 "unmap": true, 00:04:29.792 "flush": true, 00:04:29.792 "reset": true, 00:04:29.792 "nvme_admin": false, 00:04:29.792 "nvme_io": false, 00:04:29.792 "nvme_io_md": false, 00:04:29.792 "write_zeroes": true, 00:04:29.792 "zcopy": true, 00:04:29.792 "get_zone_info": false, 00:04:29.792 "zone_management": false, 00:04:29.792 "zone_append": false, 00:04:29.792 "compare": false, 00:04:29.792 "compare_and_write": false, 00:04:29.792 "abort": true, 00:04:29.792 "seek_hole": false, 00:04:29.792 "seek_data": false, 00:04:29.792 "copy": true, 00:04:29.792 "nvme_iov_md": false 00:04:29.793 }, 00:04:29.793 "memory_domains": [ 00:04:29.793 { 00:04:29.793 "dma_device_id": "system", 00:04:29.793 "dma_device_type": 1 00:04:29.793 }, 00:04:29.793 { 00:04:29.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.793 "dma_device_type": 2 00:04:29.793 } 00:04:29.793 ], 00:04:29.793 "driver_specific": {} 00:04:29.793 } 00:04:29.793 ]' 00:04:29.793 03:55:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:29.793 03:55:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:29.793 03:55:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:29.793 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.793 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.793 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.793 03:55:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:29.793 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.793 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.793 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.793 03:55:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:29.793 03:55:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:29.793 03:55:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:29.793 00:04:29.793 real 0m0.119s 00:04:29.793 user 0m0.065s 00:04:29.793 sys 0m0.015s 00:04:29.793 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.793 03:55:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.793 ************************************ 00:04:29.793 END TEST rpc_plugins 00:04:29.793 ************************************ 00:04:30.054 03:55:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:30.054 03:55:22 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.054 03:55:22 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.054 03:55:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.054 ************************************ 00:04:30.054 START TEST rpc_trace_cmd_test 00:04:30.054 ************************************ 00:04:30.054 03:55:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:04:30.054 03:55:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:30.054 03:55:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:30.054 03:55:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.054 03:55:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.054 03:55:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.054 03:55:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:30.054 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57502", 00:04:30.054 "tpoint_group_mask": "0x8", 00:04:30.054 "iscsi_conn": { 00:04:30.054 "mask": "0x2", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "scsi": { 00:04:30.054 "mask": "0x4", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "bdev": { 00:04:30.054 "mask": "0x8", 00:04:30.054 "tpoint_mask": "0xffffffffffffffff" 00:04:30.054 }, 00:04:30.054 "nvmf_rdma": { 00:04:30.054 "mask": "0x10", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "nvmf_tcp": { 00:04:30.054 "mask": "0x20", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "ftl": { 00:04:30.054 "mask": "0x40", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "blobfs": { 00:04:30.054 "mask": "0x80", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "dsa": { 00:04:30.054 "mask": "0x200", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "thread": { 00:04:30.054 "mask": "0x400", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "nvme_pcie": { 00:04:30.054 "mask": "0x800", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "iaa": { 00:04:30.054 "mask": "0x1000", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "nvme_tcp": { 00:04:30.054 "mask": "0x2000", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "bdev_nvme": { 00:04:30.054 "mask": "0x4000", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "sock": { 00:04:30.054 "mask": "0x8000", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "blob": { 00:04:30.054 "mask": "0x10000", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "bdev_raid": { 00:04:30.054 "mask": "0x20000", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 }, 00:04:30.054 "scheduler": { 00:04:30.054 "mask": "0x40000", 00:04:30.054 "tpoint_mask": "0x0" 00:04:30.054 } 00:04:30.054 }' 00:04:30.054 03:55:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:30.054 03:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:30.054 03:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:30.054 03:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:30.054 03:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:30.054 03:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:30.054 03:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:30.054 03:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:30.054 03:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:30.054 03:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:30.054 00:04:30.054 real 0m0.172s 00:04:30.054 user 0m0.138s 00:04:30.054 sys 0m0.022s 00:04:30.054 ************************************ 00:04:30.054 END TEST 
rpc_trace_cmd_test 00:04:30.054 ************************************ 00:04:30.054 03:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.054 03:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.054 03:55:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:30.054 03:55:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:30.054 03:55:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:30.054 03:55:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.054 03:55:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.054 03:55:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.054 ************************************ 00:04:30.054 START TEST rpc_daemon_integrity 00:04:30.054 ************************************ 00:04:30.054 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:30.316 { 00:04:30.316 "name": "Malloc2", 00:04:30.316 "aliases": [ 00:04:30.316 "8f9db9a0-d71f-48fc-ada3-edf72553441c" 00:04:30.316 ], 00:04:30.316 "product_name": "Malloc disk", 00:04:30.316 "block_size": 512, 00:04:30.316 "num_blocks": 16384, 00:04:30.316 "uuid": "8f9db9a0-d71f-48fc-ada3-edf72553441c", 00:04:30.316 "assigned_rate_limits": { 00:04:30.316 "rw_ios_per_sec": 0, 00:04:30.316 "rw_mbytes_per_sec": 0, 00:04:30.316 "r_mbytes_per_sec": 0, 00:04:30.316 "w_mbytes_per_sec": 0 00:04:30.316 }, 00:04:30.316 "claimed": false, 00:04:30.316 "zoned": false, 00:04:30.316 "supported_io_types": { 00:04:30.316 "read": true, 00:04:30.316 "write": true, 00:04:30.316 "unmap": true, 00:04:30.316 "flush": true, 00:04:30.316 "reset": true, 00:04:30.316 "nvme_admin": false, 00:04:30.316 "nvme_io": false, 00:04:30.316 "nvme_io_md": false, 00:04:30.316 "write_zeroes": true, 00:04:30.316 "zcopy": true, 00:04:30.316 "get_zone_info": false, 00:04:30.316 "zone_management": false, 00:04:30.316 "zone_append": false, 00:04:30.316 "compare": false, 
00:04:30.316 "compare_and_write": false, 00:04:30.316 "abort": true, 00:04:30.316 "seek_hole": false, 00:04:30.316 "seek_data": false, 00:04:30.316 "copy": true, 00:04:30.316 "nvme_iov_md": false 00:04:30.316 }, 00:04:30.316 "memory_domains": [ 00:04:30.316 { 00:04:30.316 "dma_device_id": "system", 00:04:30.316 "dma_device_type": 1 00:04:30.316 }, 00:04:30.316 { 00:04:30.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.316 "dma_device_type": 2 00:04:30.316 } 00:04:30.316 ], 00:04:30.316 "driver_specific": {} 00:04:30.316 } 00:04:30.316 ]' 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:30.316 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.317 [2024-10-13 03:55:23.324588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:30.317 [2024-10-13 03:55:23.324669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:30.317 [2024-10-13 03:55:23.324690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:30.317 [2024-10-13 03:55:23.324703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:30.317 [2024-10-13 03:55:23.327145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:30.317 [2024-10-13 03:55:23.327200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:30.317 Passthru0 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:30.317 { 00:04:30.317 "name": "Malloc2", 00:04:30.317 "aliases": [ 00:04:30.317 "8f9db9a0-d71f-48fc-ada3-edf72553441c" 00:04:30.317 ], 00:04:30.317 "product_name": "Malloc disk", 00:04:30.317 "block_size": 512, 00:04:30.317 "num_blocks": 16384, 00:04:30.317 "uuid": "8f9db9a0-d71f-48fc-ada3-edf72553441c", 00:04:30.317 "assigned_rate_limits": { 00:04:30.317 "rw_ios_per_sec": 0, 00:04:30.317 "rw_mbytes_per_sec": 0, 00:04:30.317 "r_mbytes_per_sec": 0, 00:04:30.317 "w_mbytes_per_sec": 0 00:04:30.317 }, 00:04:30.317 "claimed": true, 00:04:30.317 "claim_type": "exclusive_write", 00:04:30.317 "zoned": false, 00:04:30.317 "supported_io_types": { 00:04:30.317 "read": true, 00:04:30.317 "write": true, 00:04:30.317 "unmap": true, 00:04:30.317 "flush": true, 00:04:30.317 "reset": true, 00:04:30.317 "nvme_admin": false, 00:04:30.317 "nvme_io": false, 00:04:30.317 "nvme_io_md": false, 00:04:30.317 "write_zeroes": true, 00:04:30.317 "zcopy": true, 00:04:30.317 "get_zone_info": false, 00:04:30.317 "zone_management": false, 00:04:30.317 "zone_append": false, 00:04:30.317 "compare": false, 00:04:30.317 "compare_and_write": false, 00:04:30.317 "abort": true, 00:04:30.317 "seek_hole": false, 00:04:30.317 
"seek_data": false, 00:04:30.317 "copy": true, 00:04:30.317 "nvme_iov_md": false 00:04:30.317 }, 00:04:30.317 "memory_domains": [ 00:04:30.317 { 00:04:30.317 "dma_device_id": "system", 00:04:30.317 "dma_device_type": 1 00:04:30.317 }, 00:04:30.317 { 00:04:30.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.317 "dma_device_type": 2 00:04:30.317 } 00:04:30.317 ], 00:04:30.317 "driver_specific": {} 00:04:30.317 }, 00:04:30.317 { 00:04:30.317 "name": "Passthru0", 00:04:30.317 "aliases": [ 00:04:30.317 "73b6dcd9-0d4b-58a6-8d31-a0c409f3b3a5" 00:04:30.317 ], 00:04:30.317 "product_name": "passthru", 00:04:30.317 "block_size": 512, 00:04:30.317 "num_blocks": 16384, 00:04:30.317 "uuid": "73b6dcd9-0d4b-58a6-8d31-a0c409f3b3a5", 00:04:30.317 "assigned_rate_limits": { 00:04:30.317 "rw_ios_per_sec": 0, 00:04:30.317 "rw_mbytes_per_sec": 0, 00:04:30.317 "r_mbytes_per_sec": 0, 00:04:30.317 "w_mbytes_per_sec": 0 00:04:30.317 }, 00:04:30.317 "claimed": false, 00:04:30.317 "zoned": false, 00:04:30.317 "supported_io_types": { 00:04:30.317 "read": true, 00:04:30.317 "write": true, 00:04:30.317 "unmap": true, 00:04:30.317 "flush": true, 00:04:30.317 "reset": true, 00:04:30.317 "nvme_admin": false, 00:04:30.317 "nvme_io": false, 00:04:30.317 "nvme_io_md": false, 00:04:30.317 "write_zeroes": true, 00:04:30.317 "zcopy": true, 00:04:30.317 "get_zone_info": false, 00:04:30.317 "zone_management": false, 00:04:30.317 "zone_append": false, 00:04:30.317 "compare": false, 00:04:30.317 "compare_and_write": false, 00:04:30.317 "abort": true, 00:04:30.317 "seek_hole": false, 00:04:30.317 "seek_data": false, 00:04:30.317 "copy": true, 00:04:30.317 "nvme_iov_md": false 00:04:30.317 }, 00:04:30.317 "memory_domains": [ 00:04:30.317 { 00:04:30.317 "dma_device_id": "system", 00:04:30.317 "dma_device_type": 1 00:04:30.317 }, 00:04:30.317 { 00:04:30.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.317 "dma_device_type": 2 00:04:30.317 } 00:04:30.317 ], 00:04:30.317 "driver_specific": { 00:04:30.317 "passthru": { 00:04:30.317 "name": "Passthru0", 00:04:30.317 "base_bdev_name": "Malloc2" 00:04:30.317 } 00:04:30.317 } 00:04:30.317 } 00:04:30.317 ]' 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # 
bdevs='[]' 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.317 00:04:30.317 real 0m0.247s 00:04:30.317 user 0m0.122s 00:04:30.317 sys 0m0.040s 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.317 03:55:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.317 ************************************ 00:04:30.317 END TEST rpc_daemon_integrity 00:04:30.317 ************************************ 00:04:30.577 03:55:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:30.577 03:55:23 rpc -- rpc/rpc.sh@84 -- # killprocess 57502 00:04:30.577 03:55:23 rpc -- common/autotest_common.sh@950 -- # '[' -z 57502 ']' 00:04:30.577 03:55:23 rpc -- common/autotest_common.sh@954 -- # kill -0 57502 00:04:30.577 03:55:23 rpc -- common/autotest_common.sh@955 -- # uname 00:04:30.577 03:55:23 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:30.577 03:55:23 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57502 00:04:30.577 03:55:23 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:30.577 03:55:23 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:30.577 03:55:23 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57502' 00:04:30.577 killing process with pid 57502 00:04:30.577 03:55:23 rpc -- common/autotest_common.sh@969 -- # kill 57502 00:04:30.577 03:55:23 rpc -- common/autotest_common.sh@974 -- # wait 57502 00:04:31.951 00:04:31.951 real 0m3.484s 00:04:31.951 user 0m3.914s 00:04:31.951 sys 0m0.591s 00:04:31.951 03:55:24 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.951 ************************************ 00:04:31.951 END TEST rpc 00:04:31.951 ************************************ 00:04:31.951 03:55:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.951 03:55:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:31.951 03:55:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.951 03:55:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.951 03:55:24 -- common/autotest_common.sh@10 -- # set +x 00:04:31.951 ************************************ 00:04:31.951 START TEST skip_rpc 00:04:31.951 ************************************ 00:04:31.951 03:55:24 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:31.951 * Looking for test storage... 
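The rpc_daemon_integrity pass that finishes above re-runs the rpc_integrity checks through the rpc_cmd helper: create a malloc bdev, layer a passthru vbdev on top, confirm both appear in bdev_get_bdevs, then delete them in reverse order. A minimal standalone sketch of that sequence, issued straight through scripts/rpc.py, assumes a built SPDK tree with spdk_tgt already listening on the default /var/tmp/spdk.sock; it names the malloc bdev explicitly, whereas the test lets the target pick the name.

  # 8 MB malloc bdev with 512-byte blocks, named to match the log
  ./scripts/rpc.py bdev_malloc_create -b Malloc2 8 512
  # wrap it in a passthru vbdev
  ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  # both bdevs should now be listed (2 on an otherwise empty target)
  ./scripts/rpc.py bdev_get_bdevs | jq length
  # tear down in reverse order
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc2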
00:04:31.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.951 03:55:25 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:31.951 03:55:25 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:31.951 03:55:25 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:31.951 03:55:25 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.951 03:55:25 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:32.210 03:55:25 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:32.210 03:55:25 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.210 03:55:25 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:32.210 03:55:25 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.210 03:55:25 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.210 03:55:25 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.210 03:55:25 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:32.210 03:55:25 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.210 03:55:25 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:32.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.210 --rc genhtml_branch_coverage=1 00:04:32.210 --rc genhtml_function_coverage=1 00:04:32.210 --rc genhtml_legend=1 00:04:32.210 --rc geninfo_all_blocks=1 00:04:32.210 --rc geninfo_unexecuted_blocks=1 00:04:32.210 00:04:32.210 ' 00:04:32.210 03:55:25 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:32.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.210 --rc genhtml_branch_coverage=1 00:04:32.210 --rc genhtml_function_coverage=1 00:04:32.210 --rc genhtml_legend=1 00:04:32.210 --rc geninfo_all_blocks=1 00:04:32.210 --rc geninfo_unexecuted_blocks=1 00:04:32.210 00:04:32.210 ' 00:04:32.210 03:55:25 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:04:32.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.210 --rc genhtml_branch_coverage=1 00:04:32.210 --rc genhtml_function_coverage=1 00:04:32.210 --rc genhtml_legend=1 00:04:32.210 --rc geninfo_all_blocks=1 00:04:32.210 --rc geninfo_unexecuted_blocks=1 00:04:32.210 00:04:32.210 ' 00:04:32.210 03:55:25 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:32.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.210 --rc genhtml_branch_coverage=1 00:04:32.210 --rc genhtml_function_coverage=1 00:04:32.210 --rc genhtml_legend=1 00:04:32.210 --rc geninfo_all_blocks=1 00:04:32.210 --rc geninfo_unexecuted_blocks=1 00:04:32.210 00:04:32.210 ' 00:04:32.210 03:55:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.210 03:55:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:32.210 03:55:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:32.210 03:55:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.210 03:55:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.210 03:55:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.210 ************************************ 00:04:32.210 START TEST skip_rpc 00:04:32.210 ************************************ 00:04:32.210 03:55:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:32.210 03:55:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57714 00:04:32.210 03:55:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.210 03:55:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:32.210 03:55:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:32.210 [2024-10-13 03:55:25.190406] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:04:32.210 [2024-10-13 03:55:25.190528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57714 ] 00:04:32.210 [2024-10-13 03:55:25.340005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.469 [2024-10-13 03:55:25.424223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57714 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57714 ']' 00:04:37.750 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57714 00:04:37.751 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:37.751 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.751 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57714 00:04:37.751 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.751 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.751 killing process with pid 57714 00:04:37.751 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57714' 00:04:37.751 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57714 00:04:37.751 03:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57714 00:04:38.321 00:04:38.321 real 0m6.205s 00:04:38.321 user 0m5.853s 00:04:38.321 sys 0m0.253s 00:04:38.321 03:55:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.321 03:55:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.321 ************************************ 00:04:38.321 END TEST skip_rpc 00:04:38.321 
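The basic skip_rpc case that ends here is purely negative: spdk_tgt is started with --no-rpc-server, the test sleeps five seconds, and the only assertion is that an RPC call fails because nothing is listening on the socket. Reproduced by hand it looks roughly like this sketch, assuming a built SPDK tree and the repo layout shown in the log:

  # start the target without an RPC server, one core, as the test does
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5

  # with no RPC server listening, this call must fail
  if ./scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC server is up" >&2
  fi

  kill "$tgt_pid" && wait "$tgt_pid"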
************************************ 00:04:38.321 03:55:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:38.321 03:55:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.321 03:55:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.321 03:55:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.321 ************************************ 00:04:38.321 START TEST skip_rpc_with_json 00:04:38.321 ************************************ 00:04:38.321 03:55:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:38.321 03:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:38.321 03:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57807 00:04:38.321 03:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.321 03:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57807 00:04:38.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.321 03:55:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57807 ']' 00:04:38.321 03:55:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.321 03:55:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.321 03:55:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.321 03:55:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.321 03:55:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.321 03:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.321 [2024-10-13 03:55:31.456967] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:04:38.321 [2024-10-13 03:55:31.457089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57807 ] 00:04:38.582 [2024-10-13 03:55:31.600265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.582 [2024-10-13 03:55:31.674531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.152 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.152 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:39.152 03:55:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:39.152 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.152 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.152 [2024-10-13 03:55:32.240705] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:39.153 request: 00:04:39.153 { 00:04:39.153 "trtype": "tcp", 00:04:39.153 "method": "nvmf_get_transports", 00:04:39.153 "req_id": 1 00:04:39.153 } 00:04:39.153 Got JSON-RPC error response 00:04:39.153 response: 00:04:39.153 { 00:04:39.153 "code": -19, 00:04:39.153 "message": "No such device" 00:04:39.153 } 00:04:39.153 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:39.153 03:55:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:39.153 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.153 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.153 [2024-10-13 03:55:32.248778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:39.153 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.153 03:55:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:39.153 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.153 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.413 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.413 03:55:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:39.413 { 00:04:39.413 "subsystems": [ 00:04:39.413 { 00:04:39.413 "subsystem": "fsdev", 00:04:39.413 "config": [ 00:04:39.413 { 00:04:39.413 "method": "fsdev_set_opts", 00:04:39.413 "params": { 00:04:39.413 "fsdev_io_pool_size": 65535, 00:04:39.413 "fsdev_io_cache_size": 256 00:04:39.413 } 00:04:39.413 } 00:04:39.413 ] 00:04:39.413 }, 00:04:39.413 { 00:04:39.413 "subsystem": "keyring", 00:04:39.413 "config": [] 00:04:39.413 }, 00:04:39.413 { 00:04:39.413 "subsystem": "iobuf", 00:04:39.413 "config": [ 00:04:39.413 { 00:04:39.413 "method": "iobuf_set_options", 00:04:39.413 "params": { 00:04:39.413 "small_pool_count": 8192, 00:04:39.413 "large_pool_count": 1024, 00:04:39.413 "small_bufsize": 8192, 00:04:39.413 "large_bufsize": 135168 00:04:39.413 } 00:04:39.413 } 00:04:39.413 ] 00:04:39.413 }, 00:04:39.413 { 00:04:39.413 "subsystem": "sock", 00:04:39.413 "config": [ 00:04:39.413 { 00:04:39.413 "method": 
"sock_set_default_impl", 00:04:39.413 "params": { 00:04:39.413 "impl_name": "posix" 00:04:39.413 } 00:04:39.413 }, 00:04:39.413 { 00:04:39.413 "method": "sock_impl_set_options", 00:04:39.413 "params": { 00:04:39.413 "impl_name": "ssl", 00:04:39.413 "recv_buf_size": 4096, 00:04:39.413 "send_buf_size": 4096, 00:04:39.413 "enable_recv_pipe": true, 00:04:39.413 "enable_quickack": false, 00:04:39.413 "enable_placement_id": 0, 00:04:39.413 "enable_zerocopy_send_server": true, 00:04:39.413 "enable_zerocopy_send_client": false, 00:04:39.413 "zerocopy_threshold": 0, 00:04:39.413 "tls_version": 0, 00:04:39.413 "enable_ktls": false 00:04:39.413 } 00:04:39.413 }, 00:04:39.413 { 00:04:39.413 "method": "sock_impl_set_options", 00:04:39.413 "params": { 00:04:39.413 "impl_name": "posix", 00:04:39.413 "recv_buf_size": 2097152, 00:04:39.413 "send_buf_size": 2097152, 00:04:39.413 "enable_recv_pipe": true, 00:04:39.413 "enable_quickack": false, 00:04:39.413 "enable_placement_id": 0, 00:04:39.413 "enable_zerocopy_send_server": true, 00:04:39.413 "enable_zerocopy_send_client": false, 00:04:39.413 "zerocopy_threshold": 0, 00:04:39.413 "tls_version": 0, 00:04:39.413 "enable_ktls": false 00:04:39.413 } 00:04:39.413 } 00:04:39.413 ] 00:04:39.413 }, 00:04:39.413 { 00:04:39.413 "subsystem": "vmd", 00:04:39.413 "config": [] 00:04:39.413 }, 00:04:39.413 { 00:04:39.413 "subsystem": "accel", 00:04:39.413 "config": [ 00:04:39.413 { 00:04:39.413 "method": "accel_set_options", 00:04:39.413 "params": { 00:04:39.413 "small_cache_size": 128, 00:04:39.413 "large_cache_size": 16, 00:04:39.413 "task_count": 2048, 00:04:39.413 "sequence_count": 2048, 00:04:39.413 "buf_count": 2048 00:04:39.413 } 00:04:39.413 } 00:04:39.413 ] 00:04:39.413 }, 00:04:39.413 { 00:04:39.413 "subsystem": "bdev", 00:04:39.413 "config": [ 00:04:39.413 { 00:04:39.413 "method": "bdev_set_options", 00:04:39.413 "params": { 00:04:39.413 "bdev_io_pool_size": 65535, 00:04:39.413 "bdev_io_cache_size": 256, 00:04:39.413 "bdev_auto_examine": true, 00:04:39.413 "iobuf_small_cache_size": 128, 00:04:39.413 "iobuf_large_cache_size": 16 00:04:39.413 } 00:04:39.413 }, 00:04:39.413 { 00:04:39.413 "method": "bdev_raid_set_options", 00:04:39.413 "params": { 00:04:39.413 "process_window_size_kb": 1024, 00:04:39.413 "process_max_bandwidth_mb_sec": 0 00:04:39.414 } 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "method": "bdev_iscsi_set_options", 00:04:39.414 "params": { 00:04:39.414 "timeout_sec": 30 00:04:39.414 } 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "method": "bdev_nvme_set_options", 00:04:39.414 "params": { 00:04:39.414 "action_on_timeout": "none", 00:04:39.414 "timeout_us": 0, 00:04:39.414 "timeout_admin_us": 0, 00:04:39.414 "keep_alive_timeout_ms": 10000, 00:04:39.414 "arbitration_burst": 0, 00:04:39.414 "low_priority_weight": 0, 00:04:39.414 "medium_priority_weight": 0, 00:04:39.414 "high_priority_weight": 0, 00:04:39.414 "nvme_adminq_poll_period_us": 10000, 00:04:39.414 "nvme_ioq_poll_period_us": 0, 00:04:39.414 "io_queue_requests": 0, 00:04:39.414 "delay_cmd_submit": true, 00:04:39.414 "transport_retry_count": 4, 00:04:39.414 "bdev_retry_count": 3, 00:04:39.414 "transport_ack_timeout": 0, 00:04:39.414 "ctrlr_loss_timeout_sec": 0, 00:04:39.414 "reconnect_delay_sec": 0, 00:04:39.414 "fast_io_fail_timeout_sec": 0, 00:04:39.414 "disable_auto_failback": false, 00:04:39.414 "generate_uuids": false, 00:04:39.414 "transport_tos": 0, 00:04:39.414 "nvme_error_stat": false, 00:04:39.414 "rdma_srq_size": 0, 00:04:39.414 "io_path_stat": false, 00:04:39.414 
"allow_accel_sequence": false, 00:04:39.414 "rdma_max_cq_size": 0, 00:04:39.414 "rdma_cm_event_timeout_ms": 0, 00:04:39.414 "dhchap_digests": [ 00:04:39.414 "sha256", 00:04:39.414 "sha384", 00:04:39.414 "sha512" 00:04:39.414 ], 00:04:39.414 "dhchap_dhgroups": [ 00:04:39.414 "null", 00:04:39.414 "ffdhe2048", 00:04:39.414 "ffdhe3072", 00:04:39.414 "ffdhe4096", 00:04:39.414 "ffdhe6144", 00:04:39.414 "ffdhe8192" 00:04:39.414 ] 00:04:39.414 } 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "method": "bdev_nvme_set_hotplug", 00:04:39.414 "params": { 00:04:39.414 "period_us": 100000, 00:04:39.414 "enable": false 00:04:39.414 } 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "method": "bdev_wait_for_examine" 00:04:39.414 } 00:04:39.414 ] 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "subsystem": "scsi", 00:04:39.414 "config": null 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "subsystem": "scheduler", 00:04:39.414 "config": [ 00:04:39.414 { 00:04:39.414 "method": "framework_set_scheduler", 00:04:39.414 "params": { 00:04:39.414 "name": "static" 00:04:39.414 } 00:04:39.414 } 00:04:39.414 ] 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "subsystem": "vhost_scsi", 00:04:39.414 "config": [] 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "subsystem": "vhost_blk", 00:04:39.414 "config": [] 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "subsystem": "ublk", 00:04:39.414 "config": [] 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "subsystem": "nbd", 00:04:39.414 "config": [] 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "subsystem": "nvmf", 00:04:39.414 "config": [ 00:04:39.414 { 00:04:39.414 "method": "nvmf_set_config", 00:04:39.414 "params": { 00:04:39.414 "discovery_filter": "match_any", 00:04:39.414 "admin_cmd_passthru": { 00:04:39.414 "identify_ctrlr": false 00:04:39.414 }, 00:04:39.414 "dhchap_digests": [ 00:04:39.414 "sha256", 00:04:39.414 "sha384", 00:04:39.414 "sha512" 00:04:39.414 ], 00:04:39.414 "dhchap_dhgroups": [ 00:04:39.414 "null", 00:04:39.414 "ffdhe2048", 00:04:39.414 "ffdhe3072", 00:04:39.414 "ffdhe4096", 00:04:39.414 "ffdhe6144", 00:04:39.414 "ffdhe8192" 00:04:39.414 ] 00:04:39.414 } 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "method": "nvmf_set_max_subsystems", 00:04:39.414 "params": { 00:04:39.414 "max_subsystems": 1024 00:04:39.414 } 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "method": "nvmf_set_crdt", 00:04:39.414 "params": { 00:04:39.414 "crdt1": 0, 00:04:39.414 "crdt2": 0, 00:04:39.414 "crdt3": 0 00:04:39.414 } 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "method": "nvmf_create_transport", 00:04:39.414 "params": { 00:04:39.414 "trtype": "TCP", 00:04:39.414 "max_queue_depth": 128, 00:04:39.414 "max_io_qpairs_per_ctrlr": 127, 00:04:39.414 "in_capsule_data_size": 4096, 00:04:39.414 "max_io_size": 131072, 00:04:39.414 "io_unit_size": 131072, 00:04:39.414 "max_aq_depth": 128, 00:04:39.414 "num_shared_buffers": 511, 00:04:39.414 "buf_cache_size": 4294967295, 00:04:39.414 "dif_insert_or_strip": false, 00:04:39.414 "zcopy": false, 00:04:39.414 "c2h_success": true, 00:04:39.414 "sock_priority": 0, 00:04:39.414 "abort_timeout_sec": 1, 00:04:39.414 "ack_timeout": 0, 00:04:39.414 "data_wr_pool_size": 0 00:04:39.414 } 00:04:39.414 } 00:04:39.414 ] 00:04:39.414 }, 00:04:39.414 { 00:04:39.414 "subsystem": "iscsi", 00:04:39.414 "config": [ 00:04:39.414 { 00:04:39.414 "method": "iscsi_set_options", 00:04:39.414 "params": { 00:04:39.414 "node_base": "iqn.2016-06.io.spdk", 00:04:39.414 "max_sessions": 128, 00:04:39.414 "max_connections_per_session": 2, 00:04:39.414 "max_queue_depth": 64, 00:04:39.414 "default_time2wait": 2, 
00:04:39.414 "default_time2retain": 20, 00:04:39.414 "first_burst_length": 8192, 00:04:39.414 "immediate_data": true, 00:04:39.414 "allow_duplicated_isid": false, 00:04:39.414 "error_recovery_level": 0, 00:04:39.414 "nop_timeout": 60, 00:04:39.414 "nop_in_interval": 30, 00:04:39.414 "disable_chap": false, 00:04:39.414 "require_chap": false, 00:04:39.414 "mutual_chap": false, 00:04:39.414 "chap_group": 0, 00:04:39.414 "max_large_datain_per_connection": 64, 00:04:39.414 "max_r2t_per_connection": 4, 00:04:39.414 "pdu_pool_size": 36864, 00:04:39.414 "immediate_data_pool_size": 16384, 00:04:39.414 "data_out_pool_size": 2048 00:04:39.414 } 00:04:39.414 } 00:04:39.414 ] 00:04:39.414 } 00:04:39.414 ] 00:04:39.414 } 00:04:39.414 03:55:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:39.414 03:55:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57807 00:04:39.414 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57807 ']' 00:04:39.414 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57807 00:04:39.414 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:39.414 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.414 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57807 00:04:39.414 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.414 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:39.414 killing process with pid 57807 00:04:39.414 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57807' 00:04:39.414 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57807 00:04:39.414 03:55:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57807 00:04:40.799 03:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.799 03:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57847 00:04:40.799 03:55:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:46.074 03:55:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57847 00:04:46.074 03:55:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57847 ']' 00:04:46.075 03:55:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57847 00:04:46.075 03:55:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:46.075 03:55:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.075 03:55:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57847 00:04:46.075 03:55:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.075 killing process with pid 57847 00:04:46.075 03:55:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.075 03:55:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57847' 00:04:46.075 03:55:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57847 
00:04:46.075 03:55:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57847 00:04:46.690 03:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:46.690 03:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:46.690 00:04:46.690 real 0m8.385s 00:04:46.690 user 0m8.023s 00:04:46.690 sys 0m0.539s 00:04:46.690 03:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.690 03:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.690 ************************************ 00:04:46.690 END TEST skip_rpc_with_json 00:04:46.690 ************************************ 00:04:46.690 03:55:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:46.690 03:55:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.690 03:55:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.690 03:55:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.690 ************************************ 00:04:46.690 START TEST skip_rpc_with_delay 00:04:46.690 ************************************ 00:04:46.690 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:46.690 03:55:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.690 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:46.690 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.690 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.690 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.690 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.690 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.691 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.691 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.691 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.691 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:46.691 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:46.951 [2024-10-13 03:55:39.885842] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
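The skip_rpc_with_json run above checks that configuration captured at runtime restores a fresh target to the same state: the first spdk_tgt gets an nvmf TCP transport created over RPC, save_config is written to test/rpc/config.json, and a second spdk_tgt is started with --no-rpc-server --json, after which its log is grepped for the 'TCP Transport Init' notice. The error message just above belongs to the skip_rpc_with_delay case, which only verifies that --wait-for-rpc is rejected when no RPC server will be started, so it needs no sketch. A rough standalone version of the save/reload round trip, assuming a built SPDK tree and the file names from the log:

  # first target: normal RPC server; create a TCP transport and save the config
  ./build/bin/spdk_tgt -m 0x1 &
  pid1=$!
  until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > test/rpc/config.json
  kill "$pid1"; wait "$pid1"

  # second target: no RPC server, state replayed from the JSON file; the
  # transport init notice in its log proves the saved config was applied
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json \
      > test/rpc/log.txt 2>&1 &
  pid2=$!
  sleep 5
  kill "$pid2"; wait "$pid2"
  grep -q 'TCP Transport Init' test/rpc/log.txt && echo 'config round trip OK'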
00:04:46.952 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:46.952 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:46.952 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:46.952 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:46.952 00:04:46.952 real 0m0.107s 00:04:46.952 user 0m0.059s 00:04:46.952 sys 0m0.047s 00:04:46.952 ************************************ 00:04:46.952 END TEST skip_rpc_with_delay 00:04:46.952 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.952 03:55:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:46.952 ************************************ 00:04:46.952 03:55:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:46.952 03:55:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:46.952 03:55:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:46.952 03:55:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.952 03:55:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.952 03:55:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.952 ************************************ 00:04:46.952 START TEST exit_on_failed_rpc_init 00:04:46.952 ************************************ 00:04:46.952 03:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:46.952 03:55:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57964 00:04:46.952 03:55:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57964 00:04:46.952 03:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57964 ']' 00:04:46.952 03:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.952 03:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.952 03:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.952 03:55:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.952 03:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.952 03:55:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:46.952 [2024-10-13 03:55:40.057110] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:04:46.952 [2024-10-13 03:55:40.057206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57964 ] 00:04:47.213 [2024-10-13 03:55:40.193410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.213 [2024-10-13 03:55:40.301632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:48.157 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.157 [2024-10-13 03:55:41.098181] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:48.157 [2024-10-13 03:55:41.098331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57982 ] 00:04:48.157 [2024-10-13 03:55:41.250816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.419 [2024-10-13 03:55:41.377969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.419 [2024-10-13 03:55:41.378069] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:48.419 [2024-10-13 03:55:41.378084] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:48.419 [2024-10-13 03:55:41.378099] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57964 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57964 ']' 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57964 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57964 00:04:48.680 killing process with pid 57964 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57964' 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57964 00:04:48.680 03:55:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57964 00:04:49.622 ************************************ 00:04:49.622 END TEST exit_on_failed_rpc_init 00:04:49.622 ************************************ 00:04:49.622 00:04:49.622 real 0m2.785s 00:04:49.622 user 0m3.063s 00:04:49.622 sys 0m0.508s 00:04:49.622 03:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.622 03:55:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.883 03:55:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.883 ************************************ 00:04:49.883 END TEST skip_rpc 00:04:49.883 ************************************ 00:04:49.883 00:04:49.883 real 0m17.854s 00:04:49.883 user 0m17.138s 00:04:49.883 sys 0m1.529s 00:04:49.883 03:55:42 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.883 03:55:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.883 03:55:42 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:49.883 03:55:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.883 03:55:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.883 03:55:42 -- common/autotest_common.sh@10 -- # set +x 00:04:49.883 
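The exit_on_failed_rpc_init run that wraps up here provokes the failure path on purpose: a second spdk_tgt (-m 0x2) is launched while the first (-m 0x1) still owns /var/tmp/spdk.sock, so rpc.c reports the socket as already in use, spdk_app_start fails, and the harness checks that the child exits non-zero while the first target can still be shut down cleanly. Outside the test, two targets normally coexist by giving each its own RPC socket; in the sketch below the -r app option and the rpc.py -s option select the socket, and the second socket path is an illustrative choice rather than something taken from the log:

  # two targets, two RPC sockets, no contention
  ./build/bin/spdk_tgt -m 0x1 &                          # default /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &   # second instance, own socket

  # point rpc.py at the second instance
  ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version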
************************************ 00:04:49.883 START TEST rpc_client 00:04:49.883 ************************************ 00:04:49.883 03:55:42 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:49.883 * Looking for test storage... 00:04:49.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:49.883 03:55:42 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:49.883 03:55:42 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:49.883 03:55:42 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:49.883 03:55:42 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.883 03:55:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:49.883 03:55:43 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:49.883 03:55:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.883 03:55:43 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:49.883 03:55:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.883 03:55:43 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:49.883 03:55:43 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:49.883 03:55:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.883 03:55:43 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:49.883 03:55:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.883 03:55:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.883 03:55:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.883 03:55:43 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:49.883 03:55:43 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.883 03:55:43 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:49.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.883 --rc genhtml_branch_coverage=1 00:04:49.883 --rc genhtml_function_coverage=1 00:04:49.883 --rc genhtml_legend=1 00:04:49.883 --rc geninfo_all_blocks=1 00:04:49.883 --rc geninfo_unexecuted_blocks=1 00:04:49.883 00:04:49.883 ' 00:04:49.883 03:55:43 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:49.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.883 --rc genhtml_branch_coverage=1 00:04:49.883 --rc genhtml_function_coverage=1 00:04:49.883 --rc genhtml_legend=1 00:04:49.883 --rc geninfo_all_blocks=1 00:04:49.883 --rc geninfo_unexecuted_blocks=1 00:04:49.883 00:04:49.883 ' 00:04:49.883 03:55:43 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:49.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.883 --rc genhtml_branch_coverage=1 00:04:49.883 --rc genhtml_function_coverage=1 00:04:49.883 --rc genhtml_legend=1 00:04:49.883 --rc geninfo_all_blocks=1 00:04:49.883 --rc geninfo_unexecuted_blocks=1 00:04:49.883 00:04:49.883 ' 00:04:49.883 03:55:43 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:49.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.883 --rc genhtml_branch_coverage=1 00:04:49.883 --rc genhtml_function_coverage=1 00:04:49.883 --rc genhtml_legend=1 00:04:49.883 --rc geninfo_all_blocks=1 00:04:49.883 --rc geninfo_unexecuted_blocks=1 00:04:49.883 00:04:49.883 ' 00:04:49.883 03:55:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:50.145 OK 00:04:50.145 03:55:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:50.145 00:04:50.145 real 0m0.188s 00:04:50.145 user 0m0.108s 00:04:50.145 sys 0m0.084s 00:04:50.145 03:55:43 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.145 03:55:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:50.145 ************************************ 00:04:50.145 END TEST rpc_client 00:04:50.145 ************************************ 00:04:50.145 03:55:43 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.145 03:55:43 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.145 03:55:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.145 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:50.145 ************************************ 00:04:50.145 START TEST json_config 00:04:50.145 ************************************ 00:04:50.145 03:55:43 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:50.145 03:55:43 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.145 03:55:43 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.145 03:55:43 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.145 03:55:43 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.145 03:55:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.145 03:55:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.145 03:55:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.145 03:55:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.145 03:55:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.145 03:55:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.145 03:55:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.145 03:55:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.145 03:55:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.145 03:55:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.145 03:55:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.145 03:55:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:50.145 03:55:43 json_config -- scripts/common.sh@345 -- # : 1 00:04:50.145 03:55:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.145 03:55:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.145 03:55:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:50.145 03:55:43 json_config -- scripts/common.sh@353 -- # local d=1 00:04:50.145 03:55:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.145 03:55:43 json_config -- scripts/common.sh@355 -- # echo 1 00:04:50.145 03:55:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.145 03:55:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:50.145 03:55:43 json_config -- scripts/common.sh@353 -- # local d=2 00:04:50.145 03:55:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.145 03:55:43 json_config -- scripts/common.sh@355 -- # echo 2 00:04:50.145 03:55:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.145 03:55:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.145 03:55:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.145 03:55:43 json_config -- scripts/common.sh@368 -- # return 0 00:04:50.145 03:55:43 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.145 03:55:43 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.145 --rc genhtml_branch_coverage=1 00:04:50.145 --rc genhtml_function_coverage=1 00:04:50.145 --rc genhtml_legend=1 00:04:50.145 --rc geninfo_all_blocks=1 00:04:50.145 --rc geninfo_unexecuted_blocks=1 00:04:50.145 00:04:50.145 ' 00:04:50.145 03:55:43 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.145 --rc genhtml_branch_coverage=1 00:04:50.145 --rc genhtml_function_coverage=1 00:04:50.145 --rc genhtml_legend=1 00:04:50.145 --rc geninfo_all_blocks=1 00:04:50.145 --rc geninfo_unexecuted_blocks=1 00:04:50.145 00:04:50.145 ' 00:04:50.145 03:55:43 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.145 --rc genhtml_branch_coverage=1 00:04:50.146 --rc genhtml_function_coverage=1 00:04:50.146 --rc genhtml_legend=1 00:04:50.146 --rc geninfo_all_blocks=1 00:04:50.146 --rc geninfo_unexecuted_blocks=1 00:04:50.146 00:04:50.146 ' 00:04:50.146 03:55:43 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.146 --rc genhtml_branch_coverage=1 00:04:50.146 --rc genhtml_function_coverage=1 00:04:50.146 --rc genhtml_legend=1 00:04:50.146 --rc geninfo_all_blocks=1 00:04:50.146 --rc geninfo_unexecuted_blocks=1 00:04:50.146 00:04:50.146 ' 00:04:50.146 03:55:43 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.146 03:55:43 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a36ad734-f3af-489b-a2e0-f6600d3595d9 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a36ad734-f3af-489b-a2e0-f6600d3595d9 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.146 03:55:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.146 03:55:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.146 03:55:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.146 03:55:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.146 03:55:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.146 03:55:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.146 03:55:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.146 03:55:43 json_config -- paths/export.sh@5 -- # export PATH 00:04:50.146 03:55:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@51 -- # : 0 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.146 03:55:43 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.146 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.146 03:55:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.146 03:55:43 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:50.146 03:55:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:50.146 03:55:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:50.146 03:55:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:50.146 03:55:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:50.146 03:55:43 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:50.146 WARNING: No tests are enabled so not running JSON configuration tests 00:04:50.146 03:55:43 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:50.146 00:04:50.146 real 0m0.142s 00:04:50.146 user 0m0.082s 00:04:50.146 sys 0m0.061s 00:04:50.146 ************************************ 00:04:50.146 END TEST json_config 00:04:50.146 ************************************ 00:04:50.146 03:55:43 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.146 03:55:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.146 03:55:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.146 03:55:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.146 03:55:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.146 03:55:43 -- common/autotest_common.sh@10 -- # set +x 00:04:50.146 ************************************ 00:04:50.146 START TEST json_config_extra_key 00:04:50.146 ************************************ 00:04:50.146 03:55:43 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.409 03:55:43 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.409 03:55:43 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.409 03:55:43 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.409 03:55:43 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.409 03:55:43 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.409 03:55:43 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:50.409 03:55:43 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.409 03:55:43 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.409 --rc genhtml_branch_coverage=1 00:04:50.409 --rc genhtml_function_coverage=1 00:04:50.409 --rc genhtml_legend=1 00:04:50.409 --rc geninfo_all_blocks=1 00:04:50.409 --rc geninfo_unexecuted_blocks=1 00:04:50.409 00:04:50.409 ' 00:04:50.409 03:55:43 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.409 --rc genhtml_branch_coverage=1 00:04:50.410 --rc genhtml_function_coverage=1 00:04:50.410 --rc genhtml_legend=1 00:04:50.410 --rc geninfo_all_blocks=1 00:04:50.410 --rc geninfo_unexecuted_blocks=1 00:04:50.410 00:04:50.410 ' 00:04:50.410 03:55:43 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.410 --rc genhtml_branch_coverage=1 00:04:50.410 --rc genhtml_function_coverage=1 00:04:50.410 --rc genhtml_legend=1 00:04:50.410 --rc geninfo_all_blocks=1 00:04:50.410 --rc geninfo_unexecuted_blocks=1 00:04:50.410 00:04:50.410 ' 00:04:50.410 03:55:43 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.410 --rc genhtml_branch_coverage=1 00:04:50.410 --rc 
genhtml_function_coverage=1 00:04:50.410 --rc genhtml_legend=1 00:04:50.410 --rc geninfo_all_blocks=1 00:04:50.410 --rc geninfo_unexecuted_blocks=1 00:04:50.410 00:04:50.410 ' 00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a36ad734-f3af-489b-a2e0-f6600d3595d9 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a36ad734-f3af-489b-a2e0-f6600d3595d9 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:50.410 03:55:43 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.410 03:55:43 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.410 03:55:43 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.410 03:55:43 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.410 03:55:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.410 03:55:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.410 03:55:43 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.410 03:55:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:50.410 03:55:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.410 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.410 03:55:43 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:50.410 INFO: launching applications... 
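For context on the trace that follows, json_config_test_start_app boots spdk_tgt with the extra_key.json config and then waits until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, reusing the paths shown in the trace but with an illustrative retry count rather than the exact logic of json_config/common.sh:

    # sketch only: paths come from the trace above, the retry loop is an assumption
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    CONFIG=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --json "$CONFIG" &
    tgt_pid=$!
    for (( i = 0; i < 100; i++ )); do
        # rpc.py keeps failing until the target is listening on the socket
        "$RPC_PY" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done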
00:04:50.410 03:55:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.410 03:55:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:50.410 03:55:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:50.410 03:55:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.410 03:55:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.410 03:55:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.410 03:55:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.410 03:55:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.410 03:55:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58175 00:04:50.410 03:55:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.410 Waiting for target to run... 00:04:50.410 03:55:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58175 /var/tmp/spdk_tgt.sock 00:04:50.410 03:55:43 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58175 ']' 00:04:50.410 03:55:43 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.410 03:55:43 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.410 03:55:43 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.410 03:55:43 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:50.410 03:55:43 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.410 03:55:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.410 [2024-10-13 03:55:43.483873] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:50.410 [2024-10-13 03:55:43.484159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58175 ] 00:04:50.980 [2024-10-13 03:55:43.846641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.980 [2024-10-13 03:55:43.968160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.550 03:55:44 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.550 03:55:44 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:51.550 00:04:51.550 03:55:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:51.551 03:55:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:51.551 INFO: shutting down applications... 
00:04:51.551 03:55:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:51.551 03:55:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:51.551 03:55:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.551 03:55:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58175 ]] 00:04:51.551 03:55:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58175 00:04:51.551 03:55:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.551 03:55:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.551 03:55:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58175 00:04:51.551 03:55:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.122 03:55:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.122 03:55:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.122 03:55:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58175 00:04:52.122 03:55:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.434 03:55:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.434 03:55:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.434 03:55:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58175 00:04:52.434 03:55:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.006 03:55:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.006 03:55:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.006 03:55:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58175 00:04:53.006 03:55:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.579 03:55:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.579 03:55:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.579 SPDK target shutdown done 00:04:53.579 Success 00:04:53.579 03:55:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58175 00:04:53.579 03:55:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:53.579 03:55:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:53.579 03:55:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:53.579 03:55:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:53.579 03:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:53.579 00:04:53.579 real 0m3.239s 00:04:53.579 user 0m2.605s 00:04:53.579 sys 0m0.452s 00:04:53.579 ************************************ 00:04:53.579 END TEST json_config_extra_key 00:04:53.579 ************************************ 00:04:53.579 03:55:46 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.579 03:55:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.579 03:55:46 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.579 03:55:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.579 03:55:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.579 03:55:46 -- common/autotest_common.sh@10 -- # set +x 00:04:53.579 
************************************ 00:04:53.579 START TEST alias_rpc 00:04:53.579 ************************************ 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.579 * Looking for test storage... 00:04:53.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.579 03:55:46 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:53.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.579 --rc genhtml_branch_coverage=1 00:04:53.579 --rc genhtml_function_coverage=1 00:04:53.579 --rc genhtml_legend=1 00:04:53.579 --rc geninfo_all_blocks=1 00:04:53.579 --rc geninfo_unexecuted_blocks=1 00:04:53.579 00:04:53.579 ' 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:53.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.579 --rc genhtml_branch_coverage=1 00:04:53.579 --rc genhtml_function_coverage=1 00:04:53.579 --rc genhtml_legend=1 00:04:53.579 --rc geninfo_all_blocks=1 00:04:53.579 --rc geninfo_unexecuted_blocks=1 00:04:53.579 00:04:53.579 ' 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:53.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.579 --rc genhtml_branch_coverage=1 00:04:53.579 --rc genhtml_function_coverage=1 00:04:53.579 --rc genhtml_legend=1 00:04:53.579 --rc geninfo_all_blocks=1 00:04:53.579 --rc geninfo_unexecuted_blocks=1 00:04:53.579 00:04:53.579 ' 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:53.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.579 --rc genhtml_branch_coverage=1 00:04:53.579 --rc genhtml_function_coverage=1 00:04:53.579 --rc genhtml_legend=1 00:04:53.579 --rc geninfo_all_blocks=1 00:04:53.579 --rc geninfo_unexecuted_blocks=1 00:04:53.579 00:04:53.579 ' 00:04:53.579 03:55:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:53.579 03:55:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58268 00:04:53.579 03:55:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58268 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58268 ']' 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
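The scripts/common.sh trace repeated above is the lcov version check: each version string is split on '.', '-' and ':' into an array and the components are compared element by element. A minimal sketch of that comparison, assuming numeric components (function and variable names are illustrative, not copied from the script):

    # sketch only: returns 0 (true) when $1 is an older version than $2
    lt_version() {
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            [[ $a =~ ^[0-9]+$ ]] || a=0   # non-numeric parts treated as 0 in this sketch
            [[ $b =~ ^[0-9]+$ ]] || b=0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                          # equal versions are not "less than"
    }
    # Example: lt_version 1.15 2 && echo "old lcov: enable branch/function coverage flags"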
00:04:53.579 03:55:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.579 03:55:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.840 [2024-10-13 03:55:46.755937] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:04:53.840 [2024-10-13 03:55:46.756050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58268 ] 00:04:53.840 [2024-10-13 03:55:46.905193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.101 [2024-10-13 03:55:47.001579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.672 03:55:47 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.672 03:55:47 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:54.672 03:55:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:54.672 03:55:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58268 00:04:54.672 03:55:47 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58268 ']' 00:04:54.672 03:55:47 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58268 00:04:54.672 03:55:47 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:54.672 03:55:47 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.672 03:55:47 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58268 00:04:54.939 killing process with pid 58268 00:04:54.940 03:55:47 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.940 03:55:47 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.940 03:55:47 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58268' 00:04:54.940 03:55:47 alias_rpc -- common/autotest_common.sh@969 -- # kill 58268 00:04:54.940 03:55:47 alias_rpc -- common/autotest_common.sh@974 -- # wait 58268 00:04:56.331 00:04:56.331 real 0m2.616s 00:04:56.331 user 0m2.669s 00:04:56.331 sys 0m0.390s 00:04:56.331 03:55:49 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.331 ************************************ 00:04:56.331 END TEST alias_rpc 00:04:56.331 ************************************ 00:04:56.331 03:55:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.331 03:55:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:56.331 03:55:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.331 03:55:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.331 03:55:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.331 03:55:49 -- common/autotest_common.sh@10 -- # set +x 00:04:56.331 ************************************ 00:04:56.331 START TEST spdkcli_tcp 00:04:56.331 ************************************ 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.331 * Looking for test storage... 
00:04:56.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.331 03:55:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:56.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.331 --rc genhtml_branch_coverage=1 00:04:56.331 --rc genhtml_function_coverage=1 00:04:56.331 --rc genhtml_legend=1 00:04:56.331 --rc geninfo_all_blocks=1 00:04:56.331 --rc geninfo_unexecuted_blocks=1 00:04:56.331 00:04:56.331 ' 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:56.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.331 --rc genhtml_branch_coverage=1 00:04:56.331 --rc genhtml_function_coverage=1 00:04:56.331 --rc genhtml_legend=1 00:04:56.331 --rc geninfo_all_blocks=1 00:04:56.331 --rc geninfo_unexecuted_blocks=1 00:04:56.331 
00:04:56.331 ' 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:56.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.331 --rc genhtml_branch_coverage=1 00:04:56.331 --rc genhtml_function_coverage=1 00:04:56.331 --rc genhtml_legend=1 00:04:56.331 --rc geninfo_all_blocks=1 00:04:56.331 --rc geninfo_unexecuted_blocks=1 00:04:56.331 00:04:56.331 ' 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:56.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.331 --rc genhtml_branch_coverage=1 00:04:56.331 --rc genhtml_function_coverage=1 00:04:56.331 --rc genhtml_legend=1 00:04:56.331 --rc geninfo_all_blocks=1 00:04:56.331 --rc geninfo_unexecuted_blocks=1 00:04:56.331 00:04:56.331 ' 00:04:56.331 03:55:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:56.331 03:55:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:56.331 03:55:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:56.331 03:55:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:56.331 03:55:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:56.331 03:55:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:56.331 03:55:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.331 03:55:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58359 00:04:56.331 03:55:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58359 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58359 ']' 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.331 03:55:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:56.331 03:55:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.332 03:55:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.332 03:55:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.332 [2024-10-13 03:55:49.453790] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:04:56.332 [2024-10-13 03:55:49.453925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58359 ] 00:04:56.590 [2024-10-13 03:55:49.607435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.590 [2024-10-13 03:55:49.686138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.590 [2024-10-13 03:55:49.686279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.525 03:55:50 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.525 03:55:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:57.525 03:55:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58376 00:04:57.525 03:55:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:57.525 03:55:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:57.525 [ 00:04:57.525 "bdev_malloc_delete", 00:04:57.525 "bdev_malloc_create", 00:04:57.525 "bdev_null_resize", 00:04:57.525 "bdev_null_delete", 00:04:57.525 "bdev_null_create", 00:04:57.525 "bdev_nvme_cuse_unregister", 00:04:57.525 "bdev_nvme_cuse_register", 00:04:57.525 "bdev_opal_new_user", 00:04:57.525 "bdev_opal_set_lock_state", 00:04:57.525 "bdev_opal_delete", 00:04:57.525 "bdev_opal_get_info", 00:04:57.525 "bdev_opal_create", 00:04:57.525 "bdev_nvme_opal_revert", 00:04:57.525 "bdev_nvme_opal_init", 00:04:57.525 "bdev_nvme_send_cmd", 00:04:57.525 "bdev_nvme_set_keys", 00:04:57.525 "bdev_nvme_get_path_iostat", 00:04:57.525 "bdev_nvme_get_mdns_discovery_info", 00:04:57.525 "bdev_nvme_stop_mdns_discovery", 00:04:57.525 "bdev_nvme_start_mdns_discovery", 00:04:57.525 "bdev_nvme_set_multipath_policy", 00:04:57.525 "bdev_nvme_set_preferred_path", 00:04:57.525 "bdev_nvme_get_io_paths", 00:04:57.525 "bdev_nvme_remove_error_injection", 00:04:57.525 "bdev_nvme_add_error_injection", 00:04:57.525 "bdev_nvme_get_discovery_info", 00:04:57.525 "bdev_nvme_stop_discovery", 00:04:57.525 "bdev_nvme_start_discovery", 00:04:57.525 "bdev_nvme_get_controller_health_info", 00:04:57.525 "bdev_nvme_disable_controller", 00:04:57.525 "bdev_nvme_enable_controller", 00:04:57.525 "bdev_nvme_reset_controller", 00:04:57.525 "bdev_nvme_get_transport_statistics", 00:04:57.525 "bdev_nvme_apply_firmware", 00:04:57.525 "bdev_nvme_detach_controller", 00:04:57.525 "bdev_nvme_get_controllers", 00:04:57.525 "bdev_nvme_attach_controller", 00:04:57.525 "bdev_nvme_set_hotplug", 00:04:57.525 "bdev_nvme_set_options", 00:04:57.525 "bdev_passthru_delete", 00:04:57.525 "bdev_passthru_create", 00:04:57.525 "bdev_lvol_set_parent_bdev", 00:04:57.525 "bdev_lvol_set_parent", 00:04:57.525 "bdev_lvol_check_shallow_copy", 00:04:57.525 "bdev_lvol_start_shallow_copy", 00:04:57.525 "bdev_lvol_grow_lvstore", 00:04:57.525 "bdev_lvol_get_lvols", 00:04:57.525 "bdev_lvol_get_lvstores", 00:04:57.525 "bdev_lvol_delete", 00:04:57.525 "bdev_lvol_set_read_only", 00:04:57.525 "bdev_lvol_resize", 00:04:57.525 "bdev_lvol_decouple_parent", 00:04:57.525 "bdev_lvol_inflate", 00:04:57.525 "bdev_lvol_rename", 00:04:57.525 "bdev_lvol_clone_bdev", 00:04:57.525 "bdev_lvol_clone", 00:04:57.526 "bdev_lvol_snapshot", 00:04:57.526 "bdev_lvol_create", 00:04:57.526 "bdev_lvol_delete_lvstore", 00:04:57.526 "bdev_lvol_rename_lvstore", 00:04:57.526 
"bdev_lvol_create_lvstore", 00:04:57.526 "bdev_raid_set_options", 00:04:57.526 "bdev_raid_remove_base_bdev", 00:04:57.526 "bdev_raid_add_base_bdev", 00:04:57.526 "bdev_raid_delete", 00:04:57.526 "bdev_raid_create", 00:04:57.526 "bdev_raid_get_bdevs", 00:04:57.526 "bdev_error_inject_error", 00:04:57.526 "bdev_error_delete", 00:04:57.526 "bdev_error_create", 00:04:57.526 "bdev_split_delete", 00:04:57.526 "bdev_split_create", 00:04:57.526 "bdev_delay_delete", 00:04:57.526 "bdev_delay_create", 00:04:57.526 "bdev_delay_update_latency", 00:04:57.526 "bdev_zone_block_delete", 00:04:57.526 "bdev_zone_block_create", 00:04:57.526 "blobfs_create", 00:04:57.526 "blobfs_detect", 00:04:57.526 "blobfs_set_cache_size", 00:04:57.526 "bdev_xnvme_delete", 00:04:57.526 "bdev_xnvme_create", 00:04:57.526 "bdev_aio_delete", 00:04:57.526 "bdev_aio_rescan", 00:04:57.526 "bdev_aio_create", 00:04:57.526 "bdev_ftl_set_property", 00:04:57.526 "bdev_ftl_get_properties", 00:04:57.526 "bdev_ftl_get_stats", 00:04:57.526 "bdev_ftl_unmap", 00:04:57.526 "bdev_ftl_unload", 00:04:57.526 "bdev_ftl_delete", 00:04:57.526 "bdev_ftl_load", 00:04:57.526 "bdev_ftl_create", 00:04:57.526 "bdev_virtio_attach_controller", 00:04:57.526 "bdev_virtio_scsi_get_devices", 00:04:57.526 "bdev_virtio_detach_controller", 00:04:57.526 "bdev_virtio_blk_set_hotplug", 00:04:57.526 "bdev_iscsi_delete", 00:04:57.526 "bdev_iscsi_create", 00:04:57.526 "bdev_iscsi_set_options", 00:04:57.526 "accel_error_inject_error", 00:04:57.526 "ioat_scan_accel_module", 00:04:57.526 "dsa_scan_accel_module", 00:04:57.526 "iaa_scan_accel_module", 00:04:57.526 "keyring_file_remove_key", 00:04:57.526 "keyring_file_add_key", 00:04:57.526 "keyring_linux_set_options", 00:04:57.526 "fsdev_aio_delete", 00:04:57.526 "fsdev_aio_create", 00:04:57.526 "iscsi_get_histogram", 00:04:57.526 "iscsi_enable_histogram", 00:04:57.526 "iscsi_set_options", 00:04:57.526 "iscsi_get_auth_groups", 00:04:57.526 "iscsi_auth_group_remove_secret", 00:04:57.526 "iscsi_auth_group_add_secret", 00:04:57.526 "iscsi_delete_auth_group", 00:04:57.526 "iscsi_create_auth_group", 00:04:57.526 "iscsi_set_discovery_auth", 00:04:57.526 "iscsi_get_options", 00:04:57.526 "iscsi_target_node_request_logout", 00:04:57.526 "iscsi_target_node_set_redirect", 00:04:57.526 "iscsi_target_node_set_auth", 00:04:57.526 "iscsi_target_node_add_lun", 00:04:57.526 "iscsi_get_stats", 00:04:57.526 "iscsi_get_connections", 00:04:57.526 "iscsi_portal_group_set_auth", 00:04:57.526 "iscsi_start_portal_group", 00:04:57.526 "iscsi_delete_portal_group", 00:04:57.526 "iscsi_create_portal_group", 00:04:57.526 "iscsi_get_portal_groups", 00:04:57.526 "iscsi_delete_target_node", 00:04:57.526 "iscsi_target_node_remove_pg_ig_maps", 00:04:57.526 "iscsi_target_node_add_pg_ig_maps", 00:04:57.526 "iscsi_create_target_node", 00:04:57.526 "iscsi_get_target_nodes", 00:04:57.526 "iscsi_delete_initiator_group", 00:04:57.526 "iscsi_initiator_group_remove_initiators", 00:04:57.526 "iscsi_initiator_group_add_initiators", 00:04:57.526 "iscsi_create_initiator_group", 00:04:57.526 "iscsi_get_initiator_groups", 00:04:57.526 "nvmf_set_crdt", 00:04:57.526 "nvmf_set_config", 00:04:57.526 "nvmf_set_max_subsystems", 00:04:57.526 "nvmf_stop_mdns_prr", 00:04:57.526 "nvmf_publish_mdns_prr", 00:04:57.526 "nvmf_subsystem_get_listeners", 00:04:57.526 "nvmf_subsystem_get_qpairs", 00:04:57.526 "nvmf_subsystem_get_controllers", 00:04:57.526 "nvmf_get_stats", 00:04:57.526 "nvmf_get_transports", 00:04:57.526 "nvmf_create_transport", 00:04:57.526 "nvmf_get_targets", 00:04:57.526 
"nvmf_delete_target", 00:04:57.526 "nvmf_create_target", 00:04:57.526 "nvmf_subsystem_allow_any_host", 00:04:57.526 "nvmf_subsystem_set_keys", 00:04:57.526 "nvmf_subsystem_remove_host", 00:04:57.526 "nvmf_subsystem_add_host", 00:04:57.526 "nvmf_ns_remove_host", 00:04:57.526 "nvmf_ns_add_host", 00:04:57.526 "nvmf_subsystem_remove_ns", 00:04:57.526 "nvmf_subsystem_set_ns_ana_group", 00:04:57.526 "nvmf_subsystem_add_ns", 00:04:57.526 "nvmf_subsystem_listener_set_ana_state", 00:04:57.526 "nvmf_discovery_get_referrals", 00:04:57.526 "nvmf_discovery_remove_referral", 00:04:57.526 "nvmf_discovery_add_referral", 00:04:57.526 "nvmf_subsystem_remove_listener", 00:04:57.526 "nvmf_subsystem_add_listener", 00:04:57.526 "nvmf_delete_subsystem", 00:04:57.526 "nvmf_create_subsystem", 00:04:57.526 "nvmf_get_subsystems", 00:04:57.526 "env_dpdk_get_mem_stats", 00:04:57.526 "nbd_get_disks", 00:04:57.526 "nbd_stop_disk", 00:04:57.526 "nbd_start_disk", 00:04:57.526 "ublk_recover_disk", 00:04:57.526 "ublk_get_disks", 00:04:57.526 "ublk_stop_disk", 00:04:57.526 "ublk_start_disk", 00:04:57.526 "ublk_destroy_target", 00:04:57.526 "ublk_create_target", 00:04:57.526 "virtio_blk_create_transport", 00:04:57.526 "virtio_blk_get_transports", 00:04:57.526 "vhost_controller_set_coalescing", 00:04:57.526 "vhost_get_controllers", 00:04:57.526 "vhost_delete_controller", 00:04:57.526 "vhost_create_blk_controller", 00:04:57.526 "vhost_scsi_controller_remove_target", 00:04:57.526 "vhost_scsi_controller_add_target", 00:04:57.526 "vhost_start_scsi_controller", 00:04:57.526 "vhost_create_scsi_controller", 00:04:57.526 "thread_set_cpumask", 00:04:57.526 "scheduler_set_options", 00:04:57.526 "framework_get_governor", 00:04:57.526 "framework_get_scheduler", 00:04:57.526 "framework_set_scheduler", 00:04:57.526 "framework_get_reactors", 00:04:57.526 "thread_get_io_channels", 00:04:57.526 "thread_get_pollers", 00:04:57.526 "thread_get_stats", 00:04:57.526 "framework_monitor_context_switch", 00:04:57.526 "spdk_kill_instance", 00:04:57.526 "log_enable_timestamps", 00:04:57.526 "log_get_flags", 00:04:57.526 "log_clear_flag", 00:04:57.526 "log_set_flag", 00:04:57.526 "log_get_level", 00:04:57.526 "log_set_level", 00:04:57.526 "log_get_print_level", 00:04:57.526 "log_set_print_level", 00:04:57.526 "framework_enable_cpumask_locks", 00:04:57.526 "framework_disable_cpumask_locks", 00:04:57.526 "framework_wait_init", 00:04:57.526 "framework_start_init", 00:04:57.526 "scsi_get_devices", 00:04:57.526 "bdev_get_histogram", 00:04:57.526 "bdev_enable_histogram", 00:04:57.526 "bdev_set_qos_limit", 00:04:57.526 "bdev_set_qd_sampling_period", 00:04:57.526 "bdev_get_bdevs", 00:04:57.526 "bdev_reset_iostat", 00:04:57.526 "bdev_get_iostat", 00:04:57.526 "bdev_examine", 00:04:57.526 "bdev_wait_for_examine", 00:04:57.526 "bdev_set_options", 00:04:57.526 "accel_get_stats", 00:04:57.526 "accel_set_options", 00:04:57.526 "accel_set_driver", 00:04:57.526 "accel_crypto_key_destroy", 00:04:57.526 "accel_crypto_keys_get", 00:04:57.526 "accel_crypto_key_create", 00:04:57.526 "accel_assign_opc", 00:04:57.526 "accel_get_module_info", 00:04:57.526 "accel_get_opc_assignments", 00:04:57.526 "vmd_rescan", 00:04:57.526 "vmd_remove_device", 00:04:57.526 "vmd_enable", 00:04:57.526 "sock_get_default_impl", 00:04:57.526 "sock_set_default_impl", 00:04:57.526 "sock_impl_set_options", 00:04:57.526 "sock_impl_get_options", 00:04:57.526 "iobuf_get_stats", 00:04:57.526 "iobuf_set_options", 00:04:57.526 "keyring_get_keys", 00:04:57.526 "framework_get_pci_devices", 00:04:57.526 
"framework_get_config", 00:04:57.526 "framework_get_subsystems", 00:04:57.526 "fsdev_set_opts", 00:04:57.526 "fsdev_get_opts", 00:04:57.526 "trace_get_info", 00:04:57.526 "trace_get_tpoint_group_mask", 00:04:57.526 "trace_disable_tpoint_group", 00:04:57.526 "trace_enable_tpoint_group", 00:04:57.526 "trace_clear_tpoint_mask", 00:04:57.526 "trace_set_tpoint_mask", 00:04:57.526 "notify_get_notifications", 00:04:57.526 "notify_get_types", 00:04:57.526 "spdk_get_version", 00:04:57.526 "rpc_get_methods" 00:04:57.526 ] 00:04:57.526 03:55:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:57.526 03:55:50 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.526 03:55:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.526 03:55:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:57.526 03:55:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58359 00:04:57.526 03:55:50 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58359 ']' 00:04:57.526 03:55:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58359 00:04:57.526 03:55:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:57.526 03:55:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:57.526 03:55:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58359 00:04:57.526 03:55:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.527 03:55:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.527 03:55:50 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58359' 00:04:57.527 killing process with pid 58359 00:04:57.527 03:55:50 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58359 00:04:57.527 03:55:50 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58359 00:04:58.902 00:04:58.902 real 0m2.526s 00:04:58.902 user 0m4.590s 00:04:58.902 sys 0m0.417s 00:04:58.902 03:55:51 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.902 03:55:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.902 ************************************ 00:04:58.902 END TEST spdkcli_tcp 00:04:58.902 ************************************ 00:04:58.902 03:55:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:58.902 03:55:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.902 03:55:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.902 03:55:51 -- common/autotest_common.sh@10 -- # set +x 00:04:58.902 ************************************ 00:04:58.902 START TEST dpdk_mem_utility 00:04:58.902 ************************************ 00:04:58.902 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:58.902 * Looking for test storage... 
00:04:58.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:58.902 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.902 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.902 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.902 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.902 03:55:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:58.902 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.902 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.902 --rc genhtml_branch_coverage=1 00:04:58.902 --rc genhtml_function_coverage=1 00:04:58.902 --rc genhtml_legend=1 00:04:58.902 --rc geninfo_all_blocks=1 00:04:58.902 --rc geninfo_unexecuted_blocks=1 00:04:58.902 00:04:58.902 ' 00:04:58.902 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.902 --rc 
genhtml_branch_coverage=1 00:04:58.902 --rc genhtml_function_coverage=1 00:04:58.902 --rc genhtml_legend=1 00:04:58.902 --rc geninfo_all_blocks=1 00:04:58.902 --rc geninfo_unexecuted_blocks=1 00:04:58.902 00:04:58.902 ' 00:04:58.903 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.903 --rc genhtml_branch_coverage=1 00:04:58.903 --rc genhtml_function_coverage=1 00:04:58.903 --rc genhtml_legend=1 00:04:58.903 --rc geninfo_all_blocks=1 00:04:58.903 --rc geninfo_unexecuted_blocks=1 00:04:58.903 00:04:58.903 ' 00:04:58.903 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.903 --rc genhtml_branch_coverage=1 00:04:58.903 --rc genhtml_function_coverage=1 00:04:58.903 --rc genhtml_legend=1 00:04:58.903 --rc geninfo_all_blocks=1 00:04:58.903 --rc geninfo_unexecuted_blocks=1 00:04:58.903 00:04:58.903 ' 00:04:58.903 03:55:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:58.903 03:55:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58470 00:04:58.903 03:55:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58470 00:04:58.903 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58470 ']' 00:04:58.903 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.903 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.903 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.903 03:55:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.903 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.903 03:55:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.903 [2024-10-13 03:55:52.027838] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:04:58.903 [2024-10-13 03:55:52.028114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58470 ] 00:04:59.161 [2024-10-13 03:55:52.176307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.161 [2024-10-13 03:55:52.253015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.727 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.727 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:59.727 03:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:59.727 03:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:59.727 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.727 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.727 { 00:04:59.727 "filename": "/tmp/spdk_mem_dump.txt" 00:04:59.727 } 00:04:59.727 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.727 03:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.986 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:59.986 1 heaps totaling size 816.000000 MiB 00:04:59.986 size: 816.000000 MiB heap id: 0 00:04:59.986 end heaps---------- 00:04:59.986 9 mempools totaling size 595.772034 MiB 00:04:59.986 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:59.986 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:59.986 size: 92.545471 MiB name: bdev_io_58470 00:04:59.986 size: 50.003479 MiB name: msgpool_58470 00:04:59.986 size: 36.509338 MiB name: fsdev_io_58470 00:04:59.986 size: 21.763794 MiB name: PDU_Pool 00:04:59.986 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:59.986 size: 4.133484 MiB name: evtpool_58470 00:04:59.986 size: 0.026123 MiB name: Session_Pool 00:04:59.986 end mempools------- 00:04:59.986 6 memzones totaling size 4.142822 MiB 00:04:59.986 size: 1.000366 MiB name: RG_ring_0_58470 00:04:59.986 size: 1.000366 MiB name: RG_ring_1_58470 00:04:59.986 size: 1.000366 MiB name: RG_ring_4_58470 00:04:59.986 size: 1.000366 MiB name: RG_ring_5_58470 00:04:59.986 size: 0.125366 MiB name: RG_ring_2_58470 00:04:59.986 size: 0.015991 MiB name: RG_ring_3_58470 00:04:59.986 end memzones------- 00:04:59.986 03:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:59.986 heap id: 0 total size: 816.000000 MiB number of busy elements: 328 number of free elements: 18 00:04:59.986 list of free elements. 
size: 16.788208 MiB 00:04:59.986 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:59.986 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:59.986 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:59.986 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:59.986 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:59.986 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:59.986 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:59.986 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:59.986 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:59.986 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:59.986 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:59.986 element at address: 0x20001ac00000 with size: 0.558777 MiB 00:04:59.986 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:59.986 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:59.986 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:59.986 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:59.986 element at address: 0x200028000000 with size: 0.390442 MiB 00:04:59.986 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:59.986 list of standard malloc elements. size: 199.290894 MiB 00:04:59.986 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:59.986 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:59.986 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:59.986 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:59.986 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:59.986 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:59.986 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:59.986 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:59.986 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:59.986 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:59.986 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:59.986 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:59.986 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:59.986 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:59.987 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:59.987 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:59.987 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:59.987 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:59.987 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:59.987 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8f0c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac910c0 with size: 0.000244 MiB 
00:04:59.987 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:59.987 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:59.988 element at 
address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:59.988 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:59.988 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806cb80 
with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806fc80 with size: 0.000244 MiB 
00:04:59.988 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:59.988 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:59.988 list of memzone associated elements. size: 599.920898 MiB 00:04:59.988 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:59.988 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:59.988 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:59.988 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:59.988 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:59.988 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58470_0 00:04:59.988 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:59.988 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58470_0 00:04:59.988 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:59.988 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58470_0 00:04:59.988 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:59.988 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:59.988 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:59.988 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:59.988 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:59.988 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58470_0 00:04:59.988 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:59.988 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58470 00:04:59.988 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:59.988 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58470 00:04:59.988 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:59.988 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:59.988 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:59.988 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:59.988 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:59.988 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:59.988 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:59.988 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:59.988 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:59.989 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58470 00:04:59.989 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:59.989 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58470 00:04:59.989 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:59.989 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58470 00:04:59.989 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:59.989 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58470 00:04:59.989 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:59.989 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58470 00:04:59.989 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:59.989 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58470 00:04:59.989 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:59.989 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:59.989 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:59.989 
associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:59.989 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:59.989 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:59.989 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:59.989 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58470 00:04:59.989 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:59.989 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58470 00:04:59.989 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:59.989 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:59.989 element at address: 0x200028064140 with size: 0.023804 MiB 00:04:59.989 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:59.989 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:59.989 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58470 00:04:59.989 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:04:59.989 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:59.989 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:59.989 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58470 00:04:59.989 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:59.989 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58470 00:04:59.989 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:59.989 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58470 00:04:59.989 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:04:59.989 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:59.989 03:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:59.989 03:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58470 00:04:59.989 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58470 ']' 00:04:59.989 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58470 00:04:59.989 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:59.989 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.989 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58470 00:04:59.989 killing process with pid 58470 00:04:59.989 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:59.989 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:59.989 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58470' 00:04:59.989 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58470 00:04:59.989 03:55:52 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58470 00:05:01.363 00:05:01.363 real 0m2.339s 00:05:01.363 user 0m2.363s 00:05:01.363 sys 0m0.371s 00:05:01.363 03:55:54 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.363 ************************************ 00:05:01.363 END TEST dpdk_mem_utility 00:05:01.363 ************************************ 00:05:01.363 03:55:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.363 03:55:54 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:01.363 03:55:54 
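For reference, the dpdk_mem_utility run above reduces to the short shell sequence below. This is a minimal sketch reconstructed from the xtrace lines in the log, not part of the captured output: it assumes spdk_tgt has been built in-tree, that scripts/rpc.py talks to the default socket /var/tmp/spdk.sock, and it replaces the test's waitforlisten helper with a plain sleep.

SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Start the SPDK target in the background (the test pins it to core 0 via the
# EAL option -c 0x1, as shown in the startup banner above).
"$SPDK_DIR/build/bin/spdk_tgt" &
spdkpid=$!
sleep 2   # crude stand-in for waitforlisten on /var/tmp/spdk.sock

# Ask the target to dump its DPDK memory statistics; the log shows the dump
# being written to /tmp/spdk_mem_dump.txt.
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarize the dump (heaps, mempools, memzones), then print the detailed
# element list for heap id 0, matching the two dpdk_mem_info.py calls above.
"$SPDK_DIR/scripts/dpdk_mem_info.py"
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0

kill "$spdkpid"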
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.363 03:55:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.363 03:55:54 -- common/autotest_common.sh@10 -- # set +x 00:05:01.363 ************************************ 00:05:01.363 START TEST event 00:05:01.364 ************************************ 00:05:01.364 03:55:54 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:01.364 * Looking for test storage... 00:05:01.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:01.364 03:55:54 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.364 03:55:54 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.364 03:55:54 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.364 03:55:54 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.364 03:55:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.364 03:55:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.364 03:55:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.364 03:55:54 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.364 03:55:54 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.364 03:55:54 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.364 03:55:54 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.364 03:55:54 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.364 03:55:54 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.364 03:55:54 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.364 03:55:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.364 03:55:54 event -- scripts/common.sh@344 -- # case "$op" in 00:05:01.364 03:55:54 event -- scripts/common.sh@345 -- # : 1 00:05:01.364 03:55:54 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.364 03:55:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.364 03:55:54 event -- scripts/common.sh@365 -- # decimal 1 00:05:01.364 03:55:54 event -- scripts/common.sh@353 -- # local d=1 00:05:01.364 03:55:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.364 03:55:54 event -- scripts/common.sh@355 -- # echo 1 00:05:01.364 03:55:54 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.364 03:55:54 event -- scripts/common.sh@366 -- # decimal 2 00:05:01.364 03:55:54 event -- scripts/common.sh@353 -- # local d=2 00:05:01.364 03:55:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.364 03:55:54 event -- scripts/common.sh@355 -- # echo 2 00:05:01.364 03:55:54 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.364 03:55:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.364 03:55:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.364 03:55:54 event -- scripts/common.sh@368 -- # return 0 00:05:01.364 03:55:54 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.364 03:55:54 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.364 --rc genhtml_branch_coverage=1 00:05:01.364 --rc genhtml_function_coverage=1 00:05:01.364 --rc genhtml_legend=1 00:05:01.364 --rc geninfo_all_blocks=1 00:05:01.364 --rc geninfo_unexecuted_blocks=1 00:05:01.364 00:05:01.364 ' 00:05:01.364 03:55:54 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.364 --rc genhtml_branch_coverage=1 00:05:01.364 --rc genhtml_function_coverage=1 00:05:01.364 --rc genhtml_legend=1 00:05:01.364 --rc geninfo_all_blocks=1 00:05:01.364 --rc geninfo_unexecuted_blocks=1 00:05:01.364 00:05:01.364 ' 00:05:01.364 03:55:54 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.364 --rc genhtml_branch_coverage=1 00:05:01.364 --rc genhtml_function_coverage=1 00:05:01.364 --rc genhtml_legend=1 00:05:01.364 --rc geninfo_all_blocks=1 00:05:01.364 --rc geninfo_unexecuted_blocks=1 00:05:01.364 00:05:01.364 ' 00:05:01.364 03:55:54 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.364 --rc genhtml_branch_coverage=1 00:05:01.364 --rc genhtml_function_coverage=1 00:05:01.364 --rc genhtml_legend=1 00:05:01.364 --rc geninfo_all_blocks=1 00:05:01.364 --rc geninfo_unexecuted_blocks=1 00:05:01.364 00:05:01.364 ' 00:05:01.364 03:55:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:01.364 03:55:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:01.364 03:55:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:01.364 03:55:54 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:01.364 03:55:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.364 03:55:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.364 ************************************ 00:05:01.364 START TEST event_perf 00:05:01.364 ************************************ 00:05:01.364 03:55:54 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:01.364 Running I/O for 1 seconds...[2024-10-13 
03:55:54.370891] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:01.364 [2024-10-13 03:55:54.371044] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58556 ] 00:05:01.364 [2024-10-13 03:55:54.515671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.622 [2024-10-13 03:55:54.617005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.622 [2024-10-13 03:55:54.617303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.622 [2024-10-13 03:55:54.617684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.622 Running I/O for 1 seconds...[2024-10-13 03:55:54.617698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.996 00:05:02.996 lcore 0: 196657 00:05:02.996 lcore 1: 196659 00:05:02.996 lcore 2: 196657 00:05:02.996 lcore 3: 196660 00:05:02.996 done. 00:05:02.996 00:05:02.996 real 0m1.442s 00:05:02.996 user 0m4.244s 00:05:02.996 sys 0m0.080s 00:05:02.996 03:55:55 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.996 03:55:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.996 ************************************ 00:05:02.996 END TEST event_perf 00:05:02.996 ************************************ 00:05:02.996 03:55:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:02.996 03:55:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:02.996 03:55:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.996 03:55:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.996 ************************************ 00:05:02.996 START TEST event_reactor 00:05:02.996 ************************************ 00:05:02.996 03:55:55 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:02.996 [2024-10-13 03:55:55.871452] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:02.996 [2024-10-13 03:55:55.871674] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58595 ] 00:05:02.996 [2024-10-13 03:55:56.019465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.996 [2024-10-13 03:55:56.117882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.369 test_start 00:05:04.369 oneshot 00:05:04.369 tick 100 00:05:04.369 tick 100 00:05:04.369 tick 250 00:05:04.369 tick 100 00:05:04.369 tick 100 00:05:04.369 tick 100 00:05:04.369 tick 250 00:05:04.369 tick 500 00:05:04.369 tick 100 00:05:04.369 tick 100 00:05:04.369 tick 250 00:05:04.369 tick 100 00:05:04.369 tick 100 00:05:04.369 test_end 00:05:04.369 00:05:04.370 real 0m1.435s 00:05:04.370 user 0m1.255s 00:05:04.370 sys 0m0.070s 00:05:04.370 03:55:57 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.370 03:55:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:04.370 ************************************ 00:05:04.370 END TEST event_reactor 00:05:04.370 ************************************ 00:05:04.370 03:55:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.370 03:55:57 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:04.370 03:55:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.370 03:55:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.370 ************************************ 00:05:04.370 START TEST event_reactor_perf 00:05:04.370 ************************************ 00:05:04.370 03:55:57 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:04.370 [2024-10-13 03:55:57.365627] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:04.370 [2024-10-13 03:55:57.365733] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58632 ] 00:05:04.370 [2024-10-13 03:55:57.514645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.628 [2024-10-13 03:55:57.612637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.001 test_start 00:05:06.001 test_end 00:05:06.001 Performance: 312911 events per second 00:05:06.001 00:05:06.001 real 0m1.429s 00:05:06.001 user 0m1.249s 00:05:06.001 sys 0m0.070s 00:05:06.001 03:55:58 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.001 03:55:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.001 ************************************ 00:05:06.001 END TEST event_reactor_perf 00:05:06.001 ************************************ 00:05:06.001 03:55:58 event -- event/event.sh@49 -- # uname -s 00:05:06.001 03:55:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:06.001 03:55:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:06.001 03:55:58 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.001 03:55:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.001 03:55:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.001 ************************************ 00:05:06.001 START TEST event_scheduler 00:05:06.001 ************************************ 00:05:06.001 03:55:58 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:06.001 * Looking for test storage... 
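For reference, the event-framework micro-benchmarks exercised in this part of the run can also be invoked by hand. The sketch below uses exactly the binaries and flags that appear in the log (-m is the core mask, -t the run time in seconds) and assumes they have been built in-tree.

SPDK_DIR=/home/vagrant/spdk_repo/spdk

# event_perf: per-lcore event counts after a 1-second run on four cores (0xF).
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1

# reactor: single-core oneshot/tick trace, as in the test_start ... test_end block above.
"$SPDK_DIR/test/event/reactor/reactor" -t 1

# reactor_perf: aggregate events per second on a single core.
"$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1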
00:05:06.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:06.001 03:55:58 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:06.001 03:55:58 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:06.001 03:55:58 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:06.001 03:55:58 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:06.001 03:55:58 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.001 03:55:58 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.001 03:55:58 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.001 03:55:58 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.001 03:55:58 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.001 03:55:58 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.001 03:55:58 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.001 03:55:58 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.001 03:55:58 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.001 03:55:58 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.001 03:55:58 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.002 03:55:58 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:06.002 03:55:58 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.002 03:55:58 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:06.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.002 --rc genhtml_branch_coverage=1 00:05:06.002 --rc genhtml_function_coverage=1 00:05:06.002 --rc genhtml_legend=1 00:05:06.002 --rc geninfo_all_blocks=1 00:05:06.002 --rc geninfo_unexecuted_blocks=1 00:05:06.002 00:05:06.002 ' 00:05:06.002 03:55:58 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:06.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.002 --rc genhtml_branch_coverage=1 00:05:06.002 --rc genhtml_function_coverage=1 00:05:06.002 --rc genhtml_legend=1 00:05:06.002 --rc geninfo_all_blocks=1 00:05:06.002 --rc geninfo_unexecuted_blocks=1 00:05:06.002 00:05:06.002 ' 00:05:06.002 03:55:58 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:06.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.002 --rc genhtml_branch_coverage=1 00:05:06.002 --rc genhtml_function_coverage=1 00:05:06.002 --rc genhtml_legend=1 00:05:06.002 --rc geninfo_all_blocks=1 00:05:06.002 --rc geninfo_unexecuted_blocks=1 00:05:06.002 00:05:06.002 ' 00:05:06.002 03:55:58 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:06.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.002 --rc genhtml_branch_coverage=1 00:05:06.002 --rc genhtml_function_coverage=1 00:05:06.002 --rc genhtml_legend=1 00:05:06.002 --rc geninfo_all_blocks=1 00:05:06.002 --rc geninfo_unexecuted_blocks=1 00:05:06.002 00:05:06.002 ' 00:05:06.002 03:55:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:06.002 03:55:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58702 00:05:06.002 03:55:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.002 03:55:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58702 00:05:06.002 03:55:58 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58702 ']' 00:05:06.002 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:05:06.002 03:55:58 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.002 03:55:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:06.002 03:55:58 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.002 03:55:58 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.002 03:55:58 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.002 03:55:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.002 [2024-10-13 03:55:59.045813] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:06.002 [2024-10-13 03:55:59.046094] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58702 ] 00:05:06.270 [2024-10-13 03:55:59.197275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.270 [2024-10-13 03:55:59.279556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.270 [2024-10-13 03:55:59.279733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.270 [2024-10-13 03:55:59.279967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.270 [2024-10-13 03:55:59.279967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.837 03:55:59 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.837 03:55:59 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:06.837 03:55:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:06.837 03:55:59 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.837 03:55:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.837 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.837 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.837 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.837 POWER: Cannot set governor of lcore 0 to performance 00:05:06.837 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.837 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.837 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:06.837 POWER: Cannot set governor of lcore 0 to userspace 00:05:06.837 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:06.837 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:06.837 POWER: Unable to set Power Management Environment for lcore 0 00:05:06.837 [2024-10-13 03:55:59.893677] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:06.837 [2024-10-13 03:55:59.893695] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:06.837 [2024-10-13 03:55:59.893704] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:06.837 [2024-10-13 
03:55:59.893718] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:06.837 [2024-10-13 03:55:59.893724] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:06.837 [2024-10-13 03:55:59.893731] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:06.837 03:55:59 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.837 03:55:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:06.837 03:55:59 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.837 03:55:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 [2024-10-13 03:56:00.078258] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:07.095 03:56:00 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:07.095 03:56:00 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.095 03:56:00 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 ************************************ 00:05:07.095 START TEST scheduler_create_thread 00:05:07.095 ************************************ 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 2 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 3 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 4 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:07.095 03:56:00 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 5 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 6 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 7 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 8 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 9 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 10 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 03:56:00 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.095 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.660 ************************************ 00:05:07.660 END TEST scheduler_create_thread 00:05:07.660 ************************************ 00:05:07.660 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.660 00:05:07.660 real 0m0.591s 00:05:07.660 user 0m0.012s 00:05:07.660 sys 0m0.005s 00:05:07.661 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.661 03:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.661 03:56:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:07.661 03:56:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58702 00:05:07.661 03:56:00 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58702 ']' 00:05:07.661 03:56:00 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58702 00:05:07.661 03:56:00 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:07.661 03:56:00 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:07.661 03:56:00 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58702 00:05:07.661 killing process with pid 58702 00:05:07.661 03:56:00 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:07.661 03:56:00 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:07.661 03:56:00 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58702' 00:05:07.661 03:56:00 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58702 00:05:07.661 
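For reference, the scheduler_create_thread sequence traced above reduces to a handful of scheduler-plugin RPCs; rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py. The standalone sketch below is illustrative only: the flags (-n name, -m cpumask, -a busy percentage) and the captured thread id mirror what the trace shows rather than a documented contract, and the RPC socket is assumed to be the default one the test application listens on.

    # spawn threads pinned to individual cores, some busy (-a 100) and some idle (-a 0)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # create an unpinned thread, then change how busy it reports itself (50%)
    tid=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    # create a thread and delete it again
    tid=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid"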
03:56:00 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58702 00:05:08.225 [2024-10-13 03:56:01.162081] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:08.791 00:05:08.791 real 0m2.904s 00:05:08.791 user 0m5.618s 00:05:08.791 sys 0m0.323s 00:05:08.791 ************************************ 00:05:08.791 END TEST event_scheduler 00:05:08.791 ************************************ 00:05:08.791 03:56:01 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.791 03:56:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.791 03:56:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:08.791 03:56:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:08.791 03:56:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.791 03:56:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.791 03:56:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.791 ************************************ 00:05:08.791 START TEST app_repeat 00:05:08.791 ************************************ 00:05:08.791 03:56:01 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:08.791 Process app_repeat pid: 58781 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58781 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58781' 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:08.791 spdk_app_start Round 0 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58781 /var/tmp/spdk-nbd.sock 00:05:08.791 03:56:01 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:08.791 03:56:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58781 ']' 00:05:08.791 03:56:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.791 03:56:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.791 03:56:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:08.791 03:56:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.791 03:56:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.791 [2024-10-13 03:56:01.831732] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:08.791 [2024-10-13 03:56:01.831837] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58781 ] 00:05:09.049 [2024-10-13 03:56:01.983150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.049 [2024-10-13 03:56:02.079712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.049 [2024-10-13 03:56:02.079868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.614 03:56:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.614 03:56:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:09.614 03:56:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.872 Malloc0 00:05:09.872 03:56:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.131 Malloc1 00:05:10.131 03:56:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.131 03:56:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.389 /dev/nbd0 00:05:10.390 03:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.390 03:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:10.390 03:56:03 event.app_repeat 
-- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.390 1+0 records in 00:05:10.390 1+0 records out 00:05:10.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299023 s, 13.7 MB/s 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:10.390 03:56:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:10.390 03:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.390 03:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.390 03:56:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.648 /dev/nbd1 00:05:10.648 03:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.648 03:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.648 1+0 records in 00:05:10.648 1+0 records out 00:05:10.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360666 s, 11.4 MB/s 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:10.648 03:56:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:10.648 03:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.648 
03:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.648 03:56:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.648 03:56:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.648 03:56:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.907 { 00:05:10.907 "nbd_device": "/dev/nbd0", 00:05:10.907 "bdev_name": "Malloc0" 00:05:10.907 }, 00:05:10.907 { 00:05:10.907 "nbd_device": "/dev/nbd1", 00:05:10.907 "bdev_name": "Malloc1" 00:05:10.907 } 00:05:10.907 ]' 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.907 { 00:05:10.907 "nbd_device": "/dev/nbd0", 00:05:10.907 "bdev_name": "Malloc0" 00:05:10.907 }, 00:05:10.907 { 00:05:10.907 "nbd_device": "/dev/nbd1", 00:05:10.907 "bdev_name": "Malloc1" 00:05:10.907 } 00:05:10.907 ]' 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.907 /dev/nbd1' 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.907 /dev/nbd1' 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.907 256+0 records in 00:05:10.907 256+0 records out 00:05:10.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00705615 s, 149 MB/s 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.907 256+0 records in 00:05:10.907 256+0 records out 00:05:10.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176948 s, 59.3 MB/s 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.907 256+0 records in 00:05:10.907 256+0 records out 00:05:10.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226291 s, 46.3 MB/s 00:05:10.907 03:56:03 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.907 03:56:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.165 03:56:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.165 03:56:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.165 03:56:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.165 03:56:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.165 03:56:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.165 03:56:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.165 03:56:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.165 03:56:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.165 03:56:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.165 03:56:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.424 03:56:04 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.424 03:56:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.682 03:56:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.682 03:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.682 03:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.682 03:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.682 03:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.682 03:56:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.682 03:56:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.682 03:56:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.682 03:56:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.682 03:56:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.942 03:56:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.508 [2024-10-13 03:56:05.632485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.766 [2024-10-13 03:56:05.722989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.766 [2024-10-13 03:56:05.722991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.766 [2024-10-13 03:56:05.832871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.766 [2024-10-13 03:56:05.832932] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.296 spdk_app_start Round 1 00:05:15.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.296 03:56:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.296 03:56:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:15.296 03:56:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58781 /var/tmp/spdk-nbd.sock 00:05:15.296 03:56:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58781 ']' 00:05:15.296 03:56:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.296 03:56:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.296 03:56:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
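Each app_repeat round above exercises the same nbd_rpc_data_verify pattern. A condensed sketch of the Round 0 flow just traced is shown here; the socket path, bdev names and nbd device nodes are taken from the log, while the temp-file location is simplified (the real test writes to test/event/nbdrandtest inside the repo) and error handling is omitted.

    sock=/var/tmp/spdk-nbd.sock
    # create two malloc bdevs (size and block-size arguments as used by the test)
    scripts/rpc.py -s $sock bdev_malloc_create 64 4096      # -> Malloc0
    scripts/rpc.py -s $sock bdev_malloc_create 64 4096      # -> Malloc1
    # expose them as NBD block devices
    scripts/rpc.py -s $sock nbd_start_disk Malloc0 /dev/nbd0
    scripts/rpc.py -s $sock nbd_start_disk Malloc1 /dev/nbd1
    # write 1 MiB of random data through each device, then read it back and compare
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of=$d bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest $d
    done
    rm /tmp/nbdrandtest
    # detach the devices again
    scripts/rpc.py -s $sock nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s $sock nbd_stop_disk /dev/nbd1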
00:05:15.296 03:56:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.296 03:56:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.296 03:56:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.296 03:56:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:15.296 03:56:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.296 Malloc0 00:05:15.296 03:56:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.554 Malloc1 00:05:15.554 03:56:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.554 03:56:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.812 /dev/nbd0 00:05:15.812 03:56:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.812 03:56:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.812 1+0 records in 00:05:15.812 1+0 records out 
00:05:15.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000166761 s, 24.6 MB/s 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:15.812 03:56:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:15.812 03:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.812 03:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.812 03:56:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.071 /dev/nbd1 00:05:16.071 03:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.071 03:56:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.071 1+0 records in 00:05:16.071 1+0 records out 00:05:16.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185806 s, 22.0 MB/s 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:16.071 03:56:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:16.071 03:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.071 03:56:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.071 03:56:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.071 03:56:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.071 03:56:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.329 { 00:05:16.329 "nbd_device": "/dev/nbd0", 00:05:16.329 "bdev_name": "Malloc0" 00:05:16.329 }, 00:05:16.329 { 00:05:16.329 "nbd_device": "/dev/nbd1", 00:05:16.329 "bdev_name": "Malloc1" 00:05:16.329 } 
00:05:16.329 ]' 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.329 { 00:05:16.329 "nbd_device": "/dev/nbd0", 00:05:16.329 "bdev_name": "Malloc0" 00:05:16.329 }, 00:05:16.329 { 00:05:16.329 "nbd_device": "/dev/nbd1", 00:05:16.329 "bdev_name": "Malloc1" 00:05:16.329 } 00:05:16.329 ]' 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.329 /dev/nbd1' 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.329 /dev/nbd1' 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.329 256+0 records in 00:05:16.329 256+0 records out 00:05:16.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00763923 s, 137 MB/s 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.329 256+0 records in 00:05:16.329 256+0 records out 00:05:16.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177912 s, 58.9 MB/s 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.329 256+0 records in 00:05:16.329 256+0 records out 00:05:16.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186907 s, 56.1 MB/s 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.329 03:56:09 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.329 03:56:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.588 03:56:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.847 03:56:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.847 03:56:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.848 03:56:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.848 03:56:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.417 03:56:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.676 [2024-10-13 03:56:10.826307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.934 [2024-10-13 03:56:10.893506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.934 [2024-10-13 03:56:10.893519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.934 [2024-10-13 03:56:10.989610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.934 [2024-10-13 03:56:10.989663] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.467 spdk_app_start Round 2 00:05:20.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.467 03:56:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.467 03:56:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:20.467 03:56:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58781 /var/tmp/spdk-nbd.sock 00:05:20.467 03:56:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58781 ']' 00:05:20.467 03:56:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.467 03:56:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.467 03:56:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
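The count check that closes each round (nbd_get_count in the trace) is simply the nbd_get_disks RPC filtered through jq and grep, as the commands above show. A minimal sketch using the same socket path; the '|| true' mirrors the bare 'true' in the trace, since grep -c exits non-zero when no devices remain attached.

    disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    # list the device nodes still attached and count them
    echo "$disks_json" | jq -r '.[] | .nbd_device'
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)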
00:05:20.467 03:56:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.467 03:56:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.467 03:56:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.467 03:56:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:20.467 03:56:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.725 Malloc0 00:05:20.725 03:56:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.983 Malloc1 00:05:20.983 03:56:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.983 03:56:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.983 03:56:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.984 03:56:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.242 /dev/nbd0 00:05:21.242 03:56:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.242 03:56:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.242 1+0 records in 00:05:21.242 1+0 records out 
00:05:21.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339584 s, 12.1 MB/s 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:21.242 03:56:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:21.242 03:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.242 03:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.242 03:56:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.242 /dev/nbd1 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.501 1+0 records in 00:05:21.501 1+0 records out 00:05:21.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252999 s, 16.2 MB/s 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:21.501 03:56:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:21.501 { 00:05:21.501 "nbd_device": "/dev/nbd0", 00:05:21.501 "bdev_name": "Malloc0" 00:05:21.501 }, 00:05:21.501 { 00:05:21.501 "nbd_device": "/dev/nbd1", 00:05:21.501 "bdev_name": "Malloc1" 00:05:21.501 } 
00:05:21.501 ]' 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:21.501 { 00:05:21.501 "nbd_device": "/dev/nbd0", 00:05:21.501 "bdev_name": "Malloc0" 00:05:21.501 }, 00:05:21.501 { 00:05:21.501 "nbd_device": "/dev/nbd1", 00:05:21.501 "bdev_name": "Malloc1" 00:05:21.501 } 00:05:21.501 ]' 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:21.501 /dev/nbd1' 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:21.501 /dev/nbd1' 00:05:21.501 03:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.760 256+0 records in 00:05:21.760 256+0 records out 00:05:21.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043049 s, 244 MB/s 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.760 256+0 records in 00:05:21.760 256+0 records out 00:05:21.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128072 s, 81.9 MB/s 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.760 256+0 records in 00:05:21.760 256+0 records out 00:05:21.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193397 s, 54.2 MB/s 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.760 03:56:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.018 03:56:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.018 03:56:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.018 03:56:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.018 03:56:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.018 03:56:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.018 03:56:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.018 03:56:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.018 03:56:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.018 03:56:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.018 03:56:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.018 03:56:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.277 03:56:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.277 03:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.277 03:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:22.277 03:56:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.277 03:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.277 03:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.277 03:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.277 03:56:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.277 03:56:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.277 03:56:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.277 03:56:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.277 03:56:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.277 03:56:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:22.535 03:56:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.101 [2024-10-13 03:56:16.113697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.101 [2024-10-13 03:56:16.183718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.101 [2024-10-13 03:56:16.183897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.359 [2024-10-13 03:56:16.279397] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.359 [2024-10-13 03:56:16.279570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.887 03:56:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58781 /var/tmp/spdk-nbd.sock 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58781 ']' 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
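Zooming out, the three rounds above follow the loop structure visible in event.sh's trace: the app_repeat binary is started once, each round waits for its RPC socket, runs the nbd data-verify steps, and then sends spdk_kill_instance SIGTERM, after which the application reinitializes for the next round. The sketch below is a reduced reconstruction under those assumptions, with paths and flags copied from the log; after the loop the harness waits for the final restart and then kills the process (event.sh@38 and @39 in the trace that follows).

    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    for i in 0 1 2; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ... malloc/nbd data-verify steps sketched earlier ...
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    killprocess "$repeat_pid"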
00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:25.887 03:56:18 event.app_repeat -- event/event.sh@39 -- # killprocess 58781 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58781 ']' 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58781 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58781 00:05:25.887 killing process with pid 58781 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58781' 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58781 00:05:25.887 03:56:18 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58781 00:05:26.189 spdk_app_start is called in Round 0. 00:05:26.189 Shutdown signal received, stop current app iteration 00:05:26.189 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 reinitialization... 00:05:26.189 spdk_app_start is called in Round 1. 00:05:26.189 Shutdown signal received, stop current app iteration 00:05:26.189 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 reinitialization... 00:05:26.189 spdk_app_start is called in Round 2. 00:05:26.189 Shutdown signal received, stop current app iteration 00:05:26.189 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 reinitialization... 00:05:26.189 spdk_app_start is called in Round 3. 00:05:26.189 Shutdown signal received, stop current app iteration 00:05:26.189 03:56:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:26.189 03:56:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:26.189 ************************************ 00:05:26.189 END TEST app_repeat 00:05:26.189 ************************************ 00:05:26.189 00:05:26.189 real 0m17.518s 00:05:26.189 user 0m38.344s 00:05:26.189 sys 0m1.995s 00:05:26.189 03:56:19 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.189 03:56:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.472 03:56:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:26.473 03:56:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:26.473 03:56:19 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.473 03:56:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.473 03:56:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.473 ************************************ 00:05:26.473 START TEST cpu_locks 00:05:26.473 ************************************ 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:26.473 * Looking for test storage... 
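The app_repeat rounds summarized above are driven by a simple restart loop: each iteration asks the running instance to terminate over RPC, waits, and then blocks until the relaunched app listens on the same socket again. A rough sketch of that driver, with $pid as an illustrative placeholder for the app_repeat pid and waitforlisten being the autotest_common.sh helper seen in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for round in 0 1 2 3; do
        "$rpc" -s "$sock" spdk_kill_instance SIGTERM   # 'Shutdown signal received, stop current app iteration'
        sleep 3
        waitforlisten "$pid" "$sock"                   # back up and listening for the next round
    done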
00:05:26.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.473 03:56:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:26.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.473 --rc genhtml_branch_coverage=1 00:05:26.473 --rc genhtml_function_coverage=1 00:05:26.473 --rc genhtml_legend=1 00:05:26.473 --rc geninfo_all_blocks=1 00:05:26.473 --rc geninfo_unexecuted_blocks=1 00:05:26.473 00:05:26.473 ' 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:26.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.473 --rc genhtml_branch_coverage=1 00:05:26.473 --rc genhtml_function_coverage=1 
00:05:26.473 --rc genhtml_legend=1 00:05:26.473 --rc geninfo_all_blocks=1 00:05:26.473 --rc geninfo_unexecuted_blocks=1 00:05:26.473 00:05:26.473 ' 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:26.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.473 --rc genhtml_branch_coverage=1 00:05:26.473 --rc genhtml_function_coverage=1 00:05:26.473 --rc genhtml_legend=1 00:05:26.473 --rc geninfo_all_blocks=1 00:05:26.473 --rc geninfo_unexecuted_blocks=1 00:05:26.473 00:05:26.473 ' 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:26.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.473 --rc genhtml_branch_coverage=1 00:05:26.473 --rc genhtml_function_coverage=1 00:05:26.473 --rc genhtml_legend=1 00:05:26.473 --rc geninfo_all_blocks=1 00:05:26.473 --rc geninfo_unexecuted_blocks=1 00:05:26.473 00:05:26.473 ' 00:05:26.473 03:56:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:26.473 03:56:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:26.473 03:56:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:26.473 03:56:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.473 03:56:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.473 ************************************ 00:05:26.473 START TEST default_locks 00:05:26.473 ************************************ 00:05:26.473 03:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:26.473 03:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59217 00:05:26.473 03:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59217 00:05:26.473 03:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59217 ']' 00:05:26.473 03:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.473 03:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.473 03:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.473 03:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.473 03:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.473 03:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.473 [2024-10-13 03:56:19.578403] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
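The default_locks case that starts here launches spdk_tgt pinned to core 0 (-m 0x1) and then verifies that the pid actually holds a core lock; the check is the lslocks probe visible in the trace. A minimal sketch, with $pid recorded from the spawned target:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    waitforlisten "$pid"                               # autotest_common.sh helper, default socket /var/tmp/spdk.sock
    lslocks -p "$pid" | grep -q spdk_cpu_lock          # locks_exist: a spdk_cpu_lock_* entry must show up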
00:05:26.473 [2024-10-13 03:56:19.578881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59217 ] 00:05:26.732 [2024-10-13 03:56:19.725702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.732 [2024-10-13 03:56:19.801076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.298 03:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.298 03:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:27.298 03:56:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59217 00:05:27.299 03:56:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59217 00:05:27.299 03:56:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.559 03:56:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59217 00:05:27.559 03:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59217 ']' 00:05:27.559 03:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59217 00:05:27.559 03:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:27.559 03:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.559 03:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59217 00:05:27.559 03:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.559 killing process with pid 59217 00:05:27.559 03:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.559 03:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59217' 00:05:27.559 03:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59217 00:05:27.559 03:56:20 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59217 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59217 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59217 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59217 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59217 ']' 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.937 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.937 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59217) - No such process 00:05:28.937 ERROR: process (pid: 59217) is no longer running 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:28.937 00:05:28.937 real 0m2.294s 00:05:28.937 user 0m2.311s 00:05:28.937 sys 0m0.420s 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.937 ************************************ 00:05:28.937 END TEST default_locks 00:05:28.937 ************************************ 00:05:28.937 03:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.937 03:56:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:28.937 03:56:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.937 03:56:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.937 03:56:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.937 ************************************ 00:05:28.937 START TEST default_locks_via_rpc 00:05:28.937 ************************************ 00:05:28.937 03:56:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:28.937 03:56:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59270 00:05:28.937 03:56:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59270 00:05:28.937 03:56:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59270 ']' 00:05:28.937 03:56:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.937 03:56:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
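The default_locks teardown above also exercises the negative path: once the target has been killed, waitforlisten on the dead pid has to fail, so the test wraps it in the NOT helper (which inverts the exit status) and then confirms no stale lock files remain. Sketched with the same illustrative $pid placeholder:

    killprocess "$pid"                                 # autotest_common.sh: SIGTERM, then wait for exit
    NOT waitforlisten "$pid" /var/tmp/spdk.sock        # the 'No such process' error above is the expected outcome
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock_*)              # no_locks: nothing should match after shutdown
    (( ${#lock_files[@]} == 0 ))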
00:05:28.937 03:56:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.937 03:56:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.937 03:56:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.937 03:56:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.937 [2024-10-13 03:56:21.919275] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:28.937 [2024-10-13 03:56:21.919390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59270 ] 00:05:28.937 [2024-10-13 03:56:22.067968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.198 [2024-10-13 03:56:22.144377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.765 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.765 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:29.765 03:56:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59270 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59270 00:05:29.766 03:56:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.024 03:56:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59270 00:05:30.024 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59270 ']' 00:05:30.024 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59270 00:05:30.024 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:30.024 03:56:22 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.024 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59270 00:05:30.024 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.024 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.024 killing process with pid 59270 00:05:30.024 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59270' 00:05:30.024 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59270 00:05:30.024 03:56:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59270 00:05:30.960 00:05:30.960 real 0m2.255s 00:05:30.960 user 0m2.265s 00:05:30.960 sys 0m0.424s 00:05:30.960 ************************************ 00:05:30.960 END TEST default_locks_via_rpc 00:05:30.960 ************************************ 00:05:30.960 03:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.960 03:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.218 03:56:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:31.218 03:56:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.218 03:56:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.218 03:56:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.218 ************************************ 00:05:31.218 START TEST non_locking_app_on_locked_coremask 00:05:31.218 ************************************ 00:05:31.218 03:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:31.218 03:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59322 00:05:31.218 03:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59322 /var/tmp/spdk.sock 00:05:31.218 03:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59322 ']' 00:05:31.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.218 03:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.218 03:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.218 03:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.218 03:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.218 03:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.218 03:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.218 [2024-10-13 03:56:24.212201] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
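default_locks_via_rpc, traced above, covers the same ground but toggles the core locks at runtime over JSON-RPC instead of relying on process startup and shutdown. The two calls it issues are roughly equivalent to:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # drop the per-core lock files while running
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-acquire them; locks_exist passes again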
00:05:31.218 [2024-10-13 03:56:24.212288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59322 ] 00:05:31.218 [2024-10-13 03:56:24.353356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.477 [2024-10-13 03:56:24.430060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.045 03:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.045 03:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:32.045 03:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:32.045 03:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59338 00:05:32.045 03:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59338 /var/tmp/spdk2.sock 00:05:32.045 03:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59338 ']' 00:05:32.045 03:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.045 03:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.045 03:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.045 03:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.045 03:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.045 [2024-10-13 03:56:25.119029] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:32.045 [2024-10-13 03:56:25.119148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59338 ] 00:05:32.305 [2024-10-13 03:56:25.264334] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
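The 'CPU core locks deactivated' notice above is the crux of non_locking_app_on_locked_coremask: a second target may share core 0 with the lock-holding first target only because it is started with --disable-cpumask-locks and its own RPC socket. In outline (flags exactly as in the trace, backgrounding added for the sketch):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 &                                                  # first instance locks core 0
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance skips the lock, so it still starts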
00:05:32.305 [2024-10-13 03:56:25.264368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.305 [2024-10-13 03:56:25.421897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.240 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.240 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:33.240 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59322 00:05:33.240 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59322 00:05:33.240 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.844 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59322 00:05:33.844 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59322 ']' 00:05:33.844 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59322 00:05:33.844 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:33.844 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.844 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59322 00:05:33.844 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.844 killing process with pid 59322 00:05:33.844 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.844 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59322' 00:05:33.844 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59322 00:05:33.844 03:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59322 00:05:36.377 03:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59338 00:05:36.377 03:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59338 ']' 00:05:36.377 03:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59338 00:05:36.377 03:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:36.377 03:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.377 03:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59338 00:05:36.377 03:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:36.377 killing process with pid 59338 00:05:36.377 03:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:36.377 03:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59338' 00:05:36.377 03:56:29 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59338 00:05:36.377 03:56:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59338 00:05:37.310 00:05:37.310 real 0m6.093s 00:05:37.310 user 0m6.371s 00:05:37.310 sys 0m0.795s 00:05:37.310 03:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.310 ************************************ 00:05:37.310 END TEST non_locking_app_on_locked_coremask 00:05:37.310 ************************************ 00:05:37.310 03:56:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.310 03:56:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:37.310 03:56:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.310 03:56:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.310 03:56:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.310 ************************************ 00:05:37.310 START TEST locking_app_on_unlocked_coremask 00:05:37.310 ************************************ 00:05:37.310 03:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:37.310 03:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59429 00:05:37.310 03:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59429 /var/tmp/spdk.sock 00:05:37.310 03:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59429 ']' 00:05:37.310 03:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.310 03:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.310 03:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.310 03:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.310 03:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.310 03:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:37.310 [2024-10-13 03:56:30.346219] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:37.310 [2024-10-13 03:56:30.346314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59429 ] 00:05:37.568 [2024-10-13 03:56:30.488696] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:37.568 [2024-10-13 03:56:30.488746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.568 [2024-10-13 03:56:30.569504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.133 03:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.133 03:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:38.133 03:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59445 00:05:38.133 03:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59445 /var/tmp/spdk2.sock 00:05:38.133 03:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59445 ']' 00:05:38.133 03:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.133 03:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.133 03:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.133 03:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.133 03:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.133 03:56:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.133 [2024-10-13 03:56:31.260995] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
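locking_app_on_unlocked_coremask, running here, is the mirror image of the previous case: the first target is the one launched with --disable-cpumask-locks, so the second target, started normally on the same core mask but a different socket, ends up owning the core 0 lock. The launch order, sketched with the same illustrative $bin variable:

    "$bin" -m 0x1 --disable-cpumask-locks &            # first instance leaves core 0 unlocked
    "$bin" -m 0x1 -r /var/tmp/spdk2.sock &             # second instance claims the core 0 lock normally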
00:05:38.133 [2024-10-13 03:56:31.261116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59445 ] 00:05:38.391 [2024-10-13 03:56:31.408785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.648 [2024-10-13 03:56:31.560986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.581 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.581 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:39.581 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59445 00:05:39.581 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59445 00:05:39.581 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.581 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59429 00:05:39.581 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59429 ']' 00:05:39.581 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59429 00:05:39.581 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:39.838 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.838 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59429 00:05:39.838 killing process with pid 59429 00:05:39.838 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.838 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.838 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59429' 00:05:39.838 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59429 00:05:39.838 03:56:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59429 00:05:42.400 03:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59445 00:05:42.400 03:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59445 ']' 00:05:42.400 03:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59445 00:05:42.400 03:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:42.400 03:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.400 03:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59445 00:05:42.400 03:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.400 03:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.400 03:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59445' 00:05:42.400 killing process with pid 59445 00:05:42.400 03:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59445 00:05:42.400 03:56:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59445 00:05:43.335 00:05:43.335 real 0m6.064s 00:05:43.335 user 0m6.329s 00:05:43.336 sys 0m0.793s 00:05:43.336 ************************************ 00:05:43.336 END TEST locking_app_on_unlocked_coremask 00:05:43.336 ************************************ 00:05:43.336 03:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.336 03:56:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.336 03:56:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:43.336 03:56:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.336 03:56:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.336 03:56:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.336 ************************************ 00:05:43.336 START TEST locking_app_on_locked_coremask 00:05:43.336 ************************************ 00:05:43.336 03:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:43.336 03:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59536 00:05:43.336 03:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59536 /var/tmp/spdk.sock 00:05:43.336 03:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59536 ']' 00:05:43.336 03:56:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.336 03:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.336 03:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.336 03:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.336 03:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.336 03:56:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.336 [2024-10-13 03:56:36.464892] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:43.336 [2024-10-13 03:56:36.465013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59536 ] 00:05:43.596 [2024-10-13 03:56:36.611679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.596 [2024-10-13 03:56:36.690399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59552 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59552 /var/tmp/spdk2.sock 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59552 /var/tmp/spdk2.sock 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59552 /var/tmp/spdk2.sock 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59552 ']' 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.165 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.426 [2024-10-13 03:56:37.336097] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
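locking_app_on_locked_coremask inverts the expectation: the second target launched just above reuses core mask 0x1 but keeps core locks enabled, so it cannot start while pid 59536 holds the lock; the claim_cpu_cores error that follows is the point of the test, and the NOT wrapper turns that failure into a pass. Roughly, with $pid2 as an illustrative placeholder for the second target's pid:

    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock      # expected to fail: core 0 is already locked by the first instance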
00:05:44.426 [2024-10-13 03:56:37.336229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59552 ] 00:05:44.426 [2024-10-13 03:56:37.482242] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59536 has claimed it. 00:05:44.426 [2024-10-13 03:56:37.482308] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:45.007 ERROR: process (pid: 59552) is no longer running 00:05:45.007 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59552) - No such process 00:05:45.007 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.007 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:45.007 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:45.007 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.007 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.007 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.007 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59536 00:05:45.007 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59536 00:05:45.007 03:56:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.007 03:56:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59536 00:05:45.007 03:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59536 ']' 00:05:45.008 03:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59536 00:05:45.008 03:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:45.008 03:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.008 03:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59536 00:05:45.008 killing process with pid 59536 00:05:45.008 03:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.008 03:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.008 03:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59536' 00:05:45.008 03:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59536 00:05:45.008 03:56:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59536 00:05:46.403 00:05:46.403 real 0m2.925s 00:05:46.403 user 0m3.135s 00:05:46.403 sys 0m0.500s 00:05:46.403 03:56:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.403 ************************************ 00:05:46.403 END 
TEST locking_app_on_locked_coremask 00:05:46.403 ************************************ 00:05:46.403 03:56:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.403 03:56:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:46.403 03:56:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.403 03:56:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.403 03:56:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.403 ************************************ 00:05:46.403 START TEST locking_overlapped_coremask 00:05:46.403 ************************************ 00:05:46.403 03:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:46.403 03:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59605 00:05:46.403 03:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:46.403 03:56:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59605 /var/tmp/spdk.sock 00:05:46.403 03:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59605 ']' 00:05:46.403 03:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.403 03:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.403 03:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.403 03:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.403 03:56:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.403 [2024-10-13 03:56:39.449943] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:46.403 [2024-10-13 03:56:39.450078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59605 ] 00:05:46.665 [2024-10-13 03:56:39.604059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.665 [2024-10-13 03:56:39.740482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.665 [2024-10-13 03:56:39.740830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.665 [2024-10-13 03:56:39.740897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59623 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59623 /var/tmp/spdk2.sock 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59623 /var/tmp/spdk2.sock 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59623 /var/tmp/spdk2.sock 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59623 ']' 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.260 03:56:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.260 [2024-10-13 03:56:40.414832] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
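For locking_overlapped_coremask the first target holds locks for cores 0-2 (-m 0x7); the second, launched just above with -m 0x1c, needs core 2 as well and therefore exits with the 'Cannot create lock on core 2' error seen below. check_remaining_locks then asserts that exactly the first target's three lock files survive, roughly:

    ls /var/tmp/spdk_cpu_lock_*        # expected: _000 _001 _002 only, matching the comparison in the trace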
00:05:47.260 [2024-10-13 03:56:40.415119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59623 ] 00:05:47.522 [2024-10-13 03:56:40.570830] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59605 has claimed it. 00:05:47.522 [2024-10-13 03:56:40.570900] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:48.092 ERROR: process (pid: 59623) is no longer running 00:05:48.092 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59623) - No such process 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59605 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59605 ']' 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59605 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59605 00:05:48.092 killing process with pid 59605 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59605' 00:05:48.092 03:56:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59605 00:05:48.092 03:56:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59605 00:05:49.474 ************************************ 00:05:49.474 END TEST locking_overlapped_coremask 00:05:49.474 ************************************ 00:05:49.474 00:05:49.474 real 0m2.956s 00:05:49.474 user 0m7.955s 00:05:49.474 sys 0m0.461s 00:05:49.474 03:56:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.474 03:56:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.474 03:56:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:49.474 03:56:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.474 03:56:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.474 03:56:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.474 ************************************ 00:05:49.474 START TEST locking_overlapped_coremask_via_rpc 00:05:49.474 ************************************ 00:05:49.474 03:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:49.474 03:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59676 00:05:49.474 03:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59676 /var/tmp/spdk.sock 00:05:49.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.474 03:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59676 ']' 00:05:49.474 03:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.474 03:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.474 03:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.474 03:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.474 03:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.474 03:56:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:49.474 [2024-10-13 03:56:42.459336] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:49.474 [2024-10-13 03:56:42.459922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59676 ] 00:05:49.474 [2024-10-13 03:56:42.609744] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:49.474 [2024-10-13 03:56:42.610005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.733 [2024-10-13 03:56:42.706715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.733 [2024-10-13 03:56:42.706774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.733 [2024-10-13 03:56:42.706808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.302 03:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.302 03:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:50.302 03:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59694 00:05:50.302 03:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:50.302 03:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59694 /var/tmp/spdk2.sock 00:05:50.302 03:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59694 ']' 00:05:50.303 03:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.303 03:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.303 03:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.303 03:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.303 03:56:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.303 [2024-10-13 03:56:43.368226] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:50.303 [2024-10-13 03:56:43.368508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59694 ] 00:05:50.563 [2024-10-13 03:56:43.514577] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:50.563 [2024-10-13 03:56:43.516622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.563 [2024-10-13 03:56:43.676506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.563 [2024-10-13 03:56:43.676697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:50.563 [2024-10-13 03:56:43.676749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.505 [2024-10-13 03:56:44.618748] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59676 has claimed it. 00:05:51.505 request: 00:05:51.505 { 00:05:51.505 "method": "framework_enable_cpumask_locks", 00:05:51.505 "req_id": 1 00:05:51.505 } 00:05:51.505 Got JSON-RPC error response 00:05:51.505 response: 00:05:51.505 { 00:05:51.505 "code": -32603, 00:05:51.505 "message": "Failed to claim CPU core: 2" 00:05:51.505 } 00:05:51.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
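The -32603 response above is the expected outcome of the overlap check: core 2 belongs to both masks (0x7 and 0x1c), and the first target already holds the corresponding lock file, so the second target's RPC is refused. A minimal sketch of the same sequence, using only the binaries, sockets, masks, and RPC method that appear in this trace (a hypothetical standalone reproduction for clarity, not part of the test script itself):

```bash
# Sketch only: paths, masks, and sockets copied from the log above.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$SPDK_BIN -m 0x7  --disable-cpumask-locks &                         # cores 0-2, locks deferred
$SPDK_BIN -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # cores 2-4, overlaps on core 2

$RPC framework_enable_cpumask_locks                      # first instance claims cores 0-2
$RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# -> JSON-RPC error -32603 "Failed to claim CPU core: 2", because
#    /var/tmp/spdk_cpu_lock_002 is already held by the first target.
```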
00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59676 /var/tmp/spdk.sock 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59676 ']' 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.505 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.767 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.767 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:51.767 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59694 /var/tmp/spdk2.sock 00:05:51.767 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59694 ']' 00:05:51.767 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.767 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.767 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:51.767 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.767 03:56:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.033 ************************************ 00:05:52.033 END TEST locking_overlapped_coremask_via_rpc 00:05:52.033 ************************************ 00:05:52.033 03:56:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.033 03:56:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.033 03:56:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:52.033 03:56:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:52.033 03:56:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:52.033 03:56:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:52.033 00:05:52.033 real 0m2.657s 00:05:52.033 user 0m1.055s 00:05:52.033 sys 0m0.125s 00:05:52.033 03:56:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.033 03:56:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.033 03:56:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:52.033 03:56:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59676 ]] 00:05:52.033 03:56:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59676 00:05:52.033 03:56:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59676 ']' 00:05:52.033 03:56:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59676 00:05:52.033 03:56:45 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:52.033 03:56:45 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.033 03:56:45 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59676 00:05:52.033 killing process with pid 59676 00:05:52.033 03:56:45 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.033 03:56:45 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.033 03:56:45 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59676' 00:05:52.033 03:56:45 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59676 00:05:52.033 03:56:45 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59676 00:05:53.408 03:56:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59694 ]] 00:05:53.408 03:56:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59694 00:05:53.408 03:56:46 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59694 ']' 00:05:53.408 03:56:46 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59694 00:05:53.408 03:56:46 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:53.408 03:56:46 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.408 
03:56:46 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59694 00:05:53.408 killing process with pid 59694 00:05:53.408 03:56:46 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:53.408 03:56:46 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:53.408 03:56:46 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59694' 00:05:53.408 03:56:46 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59694 00:05:53.408 03:56:46 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59694 00:05:54.342 03:56:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:54.342 Process with pid 59676 is not found 00:05:54.342 03:56:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:54.342 03:56:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59676 ]] 00:05:54.342 03:56:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59676 00:05:54.342 03:56:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59676 ']' 00:05:54.342 03:56:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59676 00:05:54.342 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59676) - No such process 00:05:54.342 03:56:47 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59676 is not found' 00:05:54.342 03:56:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59694 ]] 00:05:54.342 03:56:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59694 00:05:54.342 03:56:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59694 ']' 00:05:54.342 03:56:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59694 00:05:54.342 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59694) - No such process 00:05:54.342 Process with pid 59694 is not found 00:05:54.342 03:56:47 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59694 is not found' 00:05:54.342 03:56:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:54.342 00:05:54.342 real 0m28.114s 00:05:54.342 user 0m48.407s 00:05:54.342 sys 0m4.257s 00:05:54.342 03:56:47 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.342 03:56:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.342 ************************************ 00:05:54.342 END TEST cpu_locks 00:05:54.342 ************************************ 00:05:54.600 00:05:54.600 real 0m53.325s 00:05:54.600 user 1m39.295s 00:05:54.600 sys 0m7.017s 00:05:54.600 03:56:47 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.600 03:56:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.600 ************************************ 00:05:54.600 END TEST event 00:05:54.600 ************************************ 00:05:54.600 03:56:47 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:54.600 03:56:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.600 03:56:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.600 03:56:47 -- common/autotest_common.sh@10 -- # set +x 00:05:54.600 ************************************ 00:05:54.600 START TEST thread 00:05:54.600 ************************************ 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:54.600 * Looking for test storage... 
00:05:54.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:54.600 03:56:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.600 03:56:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.600 03:56:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.600 03:56:47 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.600 03:56:47 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.600 03:56:47 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.600 03:56:47 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.600 03:56:47 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.600 03:56:47 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.600 03:56:47 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.600 03:56:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.600 03:56:47 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:54.600 03:56:47 thread -- scripts/common.sh@345 -- # : 1 00:05:54.600 03:56:47 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.600 03:56:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.600 03:56:47 thread -- scripts/common.sh@365 -- # decimal 1 00:05:54.600 03:56:47 thread -- scripts/common.sh@353 -- # local d=1 00:05:54.600 03:56:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.600 03:56:47 thread -- scripts/common.sh@355 -- # echo 1 00:05:54.600 03:56:47 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.600 03:56:47 thread -- scripts/common.sh@366 -- # decimal 2 00:05:54.600 03:56:47 thread -- scripts/common.sh@353 -- # local d=2 00:05:54.600 03:56:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.600 03:56:47 thread -- scripts/common.sh@355 -- # echo 2 00:05:54.600 03:56:47 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.600 03:56:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.600 03:56:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.600 03:56:47 thread -- scripts/common.sh@368 -- # return 0 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:54.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.600 --rc genhtml_branch_coverage=1 00:05:54.600 --rc genhtml_function_coverage=1 00:05:54.600 --rc genhtml_legend=1 00:05:54.600 --rc geninfo_all_blocks=1 00:05:54.600 --rc geninfo_unexecuted_blocks=1 00:05:54.600 00:05:54.600 ' 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:54.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.600 --rc genhtml_branch_coverage=1 00:05:54.600 --rc genhtml_function_coverage=1 00:05:54.600 --rc genhtml_legend=1 00:05:54.600 --rc geninfo_all_blocks=1 00:05:54.600 --rc geninfo_unexecuted_blocks=1 00:05:54.600 00:05:54.600 ' 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:54.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:54.600 --rc genhtml_branch_coverage=1 00:05:54.600 --rc genhtml_function_coverage=1 00:05:54.600 --rc genhtml_legend=1 00:05:54.600 --rc geninfo_all_blocks=1 00:05:54.600 --rc geninfo_unexecuted_blocks=1 00:05:54.600 00:05:54.600 ' 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:54.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.600 --rc genhtml_branch_coverage=1 00:05:54.600 --rc genhtml_function_coverage=1 00:05:54.600 --rc genhtml_legend=1 00:05:54.600 --rc geninfo_all_blocks=1 00:05:54.600 --rc geninfo_unexecuted_blocks=1 00:05:54.600 00:05:54.600 ' 00:05:54.600 03:56:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.600 03:56:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.600 ************************************ 00:05:54.600 START TEST thread_poller_perf 00:05:54.600 ************************************ 00:05:54.600 03:56:47 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:54.600 [2024-10-13 03:56:47.719041] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:54.600 [2024-10-13 03:56:47.719150] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59843 ] 00:05:54.858 [2024-10-13 03:56:47.868638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.858 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:54.858 [2024-10-13 03:56:47.962808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.229 [2024-10-13T03:56:49.389Z] ====================================== 00:05:56.229 [2024-10-13T03:56:49.389Z] busy:2612334432 (cyc) 00:05:56.229 [2024-10-13T03:56:49.389Z] total_run_count: 306000 00:05:56.229 [2024-10-13T03:56:49.389Z] tsc_hz: 2600000000 (cyc) 00:05:56.229 [2024-10-13T03:56:49.389Z] ====================================== 00:05:56.229 [2024-10-13T03:56:49.389Z] poller_cost: 8537 (cyc), 3283 (nsec) 00:05:56.229 00:05:56.229 real 0m1.432s 00:05:56.229 user 0m1.255s 00:05:56.229 sys 0m0.070s 00:05:56.229 03:56:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.229 03:56:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:56.229 ************************************ 00:05:56.229 END TEST thread_poller_perf 00:05:56.229 ************************************ 00:05:56.229 03:56:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:56.229 03:56:49 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:56.229 03:56:49 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.229 03:56:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.229 ************************************ 00:05:56.229 START TEST thread_poller_perf 00:05:56.229 ************************************ 00:05:56.229 03:56:49 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:56.229 [2024-10-13 03:56:49.202882] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:05:56.229 [2024-10-13 03:56:49.202968] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59880 ] 00:05:56.229 [2024-10-13 03:56:49.346884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.487 Running 1000 pollers for 1 seconds with 0 microseconds period. 
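As a side note, the poller_cost figures printed in the first summary above are consistent with dividing the busy cycle count by total_run_count and converting to nanoseconds via tsc_hz. A quick cross-check with the values from this run (illustrative arithmetic only; not a claim about how poller_perf computes the numbers internally):

```bash
# Values copied from the first poller_perf summary above.
busy_cyc=2612334432      # busy: cycles spent running 1000 pollers for ~1 s
runs=306000              # total_run_count
tsc_hz=2600000000        # tsc_hz

echo "poller_cost (cyc) : $(( busy_cyc / runs ))"                        # 8537
echo "poller_cost (nsec): $(( busy_cyc / runs * 1000000000 / tsc_hz ))"  # 3283
```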
00:05:56.487 [2024-10-13 03:56:49.444815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.891 [2024-10-13T03:56:51.051Z] ====================================== 00:05:57.891 [2024-10-13T03:56:51.051Z] busy:2603551374 (cyc) 00:05:57.891 [2024-10-13T03:56:51.051Z] total_run_count: 3932000 00:05:57.891 [2024-10-13T03:56:51.051Z] tsc_hz: 2600000000 (cyc) 00:05:57.891 [2024-10-13T03:56:51.051Z] ====================================== 00:05:57.891 [2024-10-13T03:56:51.051Z] poller_cost: 662 (cyc), 254 (nsec) 00:05:57.891 00:05:57.891 real 0m1.423s 00:05:57.891 user 0m1.268s 00:05:57.891 sys 0m0.048s 00:05:57.891 03:56:50 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.891 ************************************ 00:05:57.891 END TEST thread_poller_perf 00:05:57.891 ************************************ 00:05:57.891 03:56:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.891 03:56:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:57.891 00:05:57.891 real 0m3.105s 00:05:57.891 user 0m2.637s 00:05:57.891 sys 0m0.236s 00:05:57.891 03:56:50 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.891 ************************************ 00:05:57.891 END TEST thread 00:05:57.891 03:56:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.891 ************************************ 00:05:57.891 03:56:50 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:57.891 03:56:50 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:57.891 03:56:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.891 03:56:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.891 03:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:57.891 ************************************ 00:05:57.891 START TEST app_cmdline 00:05:57.891 ************************************ 00:05:57.891 03:56:50 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:57.891 * Looking for test storage... 
00:05:57.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:57.891 03:56:50 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:57.891 03:56:50 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:57.891 03:56:50 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:57.891 03:56:50 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.891 03:56:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:57.891 03:56:50 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.891 03:56:50 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:57.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.891 --rc genhtml_branch_coverage=1 00:05:57.891 --rc genhtml_function_coverage=1 00:05:57.891 --rc genhtml_legend=1 00:05:57.891 --rc geninfo_all_blocks=1 00:05:57.891 --rc geninfo_unexecuted_blocks=1 00:05:57.891 00:05:57.891 ' 00:05:57.892 03:56:50 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:57.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.892 --rc genhtml_branch_coverage=1 00:05:57.892 --rc genhtml_function_coverage=1 00:05:57.892 --rc genhtml_legend=1 00:05:57.892 --rc geninfo_all_blocks=1 00:05:57.892 --rc geninfo_unexecuted_blocks=1 00:05:57.892 
00:05:57.892 ' 00:05:57.892 03:56:50 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:57.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.892 --rc genhtml_branch_coverage=1 00:05:57.892 --rc genhtml_function_coverage=1 00:05:57.892 --rc genhtml_legend=1 00:05:57.892 --rc geninfo_all_blocks=1 00:05:57.892 --rc geninfo_unexecuted_blocks=1 00:05:57.892 00:05:57.892 ' 00:05:57.892 03:56:50 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:57.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.892 --rc genhtml_branch_coverage=1 00:05:57.892 --rc genhtml_function_coverage=1 00:05:57.892 --rc genhtml_legend=1 00:05:57.892 --rc geninfo_all_blocks=1 00:05:57.892 --rc geninfo_unexecuted_blocks=1 00:05:57.892 00:05:57.892 ' 00:05:57.892 03:56:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:57.892 03:56:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59963 00:05:57.892 03:56:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59963 00:05:57.892 03:56:50 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:57.892 03:56:50 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59963 ']' 00:05:57.892 03:56:50 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.892 03:56:50 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.892 03:56:50 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.892 03:56:50 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.892 03:56:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.892 [2024-10-13 03:56:50.927307] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:05:57.892 [2024-10-13 03:56:50.927462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59963 ] 00:05:58.153 [2024-10-13 03:56:51.082439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.153 [2024-10-13 03:56:51.204979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.092 03:56:51 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.092 03:56:51 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:59.092 03:56:51 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:59.092 { 00:05:59.092 "version": "SPDK v25.01-pre git sha1 bbce7a874", 00:05:59.092 "fields": { 00:05:59.092 "major": 25, 00:05:59.092 "minor": 1, 00:05:59.092 "patch": 0, 00:05:59.092 "suffix": "-pre", 00:05:59.092 "commit": "bbce7a874" 00:05:59.092 } 00:05:59.092 } 00:05:59.092 03:56:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:59.092 03:56:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:59.092 03:56:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:59.092 03:56:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:59.092 03:56:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:59.092 03:56:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:59.092 03:56:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.092 03:56:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:59.092 03:56:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:59.092 03:56:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:59.092 03:56:52 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:59.353 request: 00:05:59.353 { 00:05:59.353 "method": "env_dpdk_get_mem_stats", 00:05:59.353 "req_id": 1 00:05:59.353 } 00:05:59.353 Got JSON-RPC error response 00:05:59.353 response: 00:05:59.353 { 00:05:59.353 "code": -32601, 00:05:59.353 "message": "Method not found" 00:05:59.353 } 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.353 03:56:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59963 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59963 ']' 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59963 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59963 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59963' 00:05:59.353 killing process with pid 59963 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@969 -- # kill 59963 00:05:59.353 03:56:52 app_cmdline -- common/autotest_common.sh@974 -- # wait 59963 00:06:00.740 00:06:00.740 real 0m3.193s 00:06:00.740 user 0m3.404s 00:06:00.740 sys 0m0.560s 00:06:00.740 03:56:53 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.740 03:56:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:00.740 ************************************ 00:06:00.740 END TEST app_cmdline 00:06:00.740 ************************************ 00:06:01.002 03:56:53 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:01.002 03:56:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.002 03:56:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.002 03:56:53 -- common/autotest_common.sh@10 -- # set +x 00:06:01.002 ************************************ 00:06:01.002 START TEST version 00:06:01.002 ************************************ 00:06:01.002 03:56:53 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:01.002 * Looking for test storage... 
00:06:01.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:01.002 03:56:53 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:01.002 03:56:53 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:01.002 03:56:53 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:01.002 03:56:54 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:01.002 03:56:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.002 03:56:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.002 03:56:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.002 03:56:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.002 03:56:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.002 03:56:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.002 03:56:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.002 03:56:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.002 03:56:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.002 03:56:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.002 03:56:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.002 03:56:54 version -- scripts/common.sh@344 -- # case "$op" in 00:06:01.002 03:56:54 version -- scripts/common.sh@345 -- # : 1 00:06:01.002 03:56:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.002 03:56:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.002 03:56:54 version -- scripts/common.sh@365 -- # decimal 1 00:06:01.002 03:56:54 version -- scripts/common.sh@353 -- # local d=1 00:06:01.002 03:56:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.002 03:56:54 version -- scripts/common.sh@355 -- # echo 1 00:06:01.002 03:56:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.002 03:56:54 version -- scripts/common.sh@366 -- # decimal 2 00:06:01.002 03:56:54 version -- scripts/common.sh@353 -- # local d=2 00:06:01.002 03:56:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.002 03:56:54 version -- scripts/common.sh@355 -- # echo 2 00:06:01.002 03:56:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.002 03:56:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.002 03:56:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.002 03:56:54 version -- scripts/common.sh@368 -- # return 0 00:06:01.002 03:56:54 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.002 03:56:54 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:01.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.002 --rc genhtml_branch_coverage=1 00:06:01.002 --rc genhtml_function_coverage=1 00:06:01.002 --rc genhtml_legend=1 00:06:01.002 --rc geninfo_all_blocks=1 00:06:01.002 --rc geninfo_unexecuted_blocks=1 00:06:01.002 00:06:01.002 ' 00:06:01.002 03:56:54 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:01.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.002 --rc genhtml_branch_coverage=1 00:06:01.002 --rc genhtml_function_coverage=1 00:06:01.002 --rc genhtml_legend=1 00:06:01.002 --rc geninfo_all_blocks=1 00:06:01.002 --rc geninfo_unexecuted_blocks=1 00:06:01.002 00:06:01.002 ' 00:06:01.002 03:56:54 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:01.002 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:01.002 --rc genhtml_branch_coverage=1 00:06:01.002 --rc genhtml_function_coverage=1 00:06:01.002 --rc genhtml_legend=1 00:06:01.002 --rc geninfo_all_blocks=1 00:06:01.002 --rc geninfo_unexecuted_blocks=1 00:06:01.002 00:06:01.002 ' 00:06:01.002 03:56:54 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:01.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.002 --rc genhtml_branch_coverage=1 00:06:01.002 --rc genhtml_function_coverage=1 00:06:01.002 --rc genhtml_legend=1 00:06:01.002 --rc geninfo_all_blocks=1 00:06:01.002 --rc geninfo_unexecuted_blocks=1 00:06:01.002 00:06:01.002 ' 00:06:01.002 03:56:54 version -- app/version.sh@17 -- # get_header_version major 00:06:01.003 03:56:54 version -- app/version.sh@14 -- # cut -f2 00:06:01.003 03:56:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:01.003 03:56:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.003 03:56:54 version -- app/version.sh@17 -- # major=25 00:06:01.003 03:56:54 version -- app/version.sh@18 -- # get_header_version minor 00:06:01.003 03:56:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:01.003 03:56:54 version -- app/version.sh@14 -- # cut -f2 00:06:01.003 03:56:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.003 03:56:54 version -- app/version.sh@18 -- # minor=1 00:06:01.003 03:56:54 version -- app/version.sh@19 -- # get_header_version patch 00:06:01.003 03:56:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:01.003 03:56:54 version -- app/version.sh@14 -- # cut -f2 00:06:01.003 03:56:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.003 03:56:54 version -- app/version.sh@19 -- # patch=0 00:06:01.003 03:56:54 version -- app/version.sh@20 -- # get_header_version suffix 00:06:01.003 03:56:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:01.003 03:56:54 version -- app/version.sh@14 -- # cut -f2 00:06:01.003 03:56:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:01.003 03:56:54 version -- app/version.sh@20 -- # suffix=-pre 00:06:01.003 03:56:54 version -- app/version.sh@22 -- # version=25.1 00:06:01.003 03:56:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:01.003 03:56:54 version -- app/version.sh@28 -- # version=25.1rc0 00:06:01.003 03:56:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:01.003 03:56:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:01.003 03:56:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:01.003 03:56:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:01.003 ************************************ 00:06:01.003 END TEST version 00:06:01.003 ************************************ 00:06:01.003 00:06:01.003 real 0m0.188s 00:06:01.003 user 0m0.121s 00:06:01.003 sys 0m0.096s 00:06:01.003 03:56:54 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.003 03:56:54 version -- common/autotest_common.sh@10 -- # set +x 00:06:01.003 03:56:54 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:01.003 03:56:54 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:01.003 03:56:54 -- spdk/autotest.sh@194 -- # uname -s 00:06:01.265 03:56:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:01.265 03:56:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:01.265 03:56:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:01.265 03:56:54 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:01.265 03:56:54 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:01.265 03:56:54 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:01.265 03:56:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.265 03:56:54 -- common/autotest_common.sh@10 -- # set +x 00:06:01.265 ************************************ 00:06:01.265 START TEST blockdev_nvme 00:06:01.265 ************************************ 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:01.265 * Looking for test storage... 00:06:01.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.265 03:56:54 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:01.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.265 --rc genhtml_branch_coverage=1 00:06:01.265 --rc genhtml_function_coverage=1 00:06:01.265 --rc genhtml_legend=1 00:06:01.265 --rc geninfo_all_blocks=1 00:06:01.265 --rc geninfo_unexecuted_blocks=1 00:06:01.265 00:06:01.265 ' 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:01.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.265 --rc genhtml_branch_coverage=1 00:06:01.265 --rc genhtml_function_coverage=1 00:06:01.265 --rc genhtml_legend=1 00:06:01.265 --rc geninfo_all_blocks=1 00:06:01.265 --rc geninfo_unexecuted_blocks=1 00:06:01.265 00:06:01.265 ' 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:01.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.265 --rc genhtml_branch_coverage=1 00:06:01.265 --rc genhtml_function_coverage=1 00:06:01.265 --rc genhtml_legend=1 00:06:01.265 --rc geninfo_all_blocks=1 00:06:01.265 --rc geninfo_unexecuted_blocks=1 00:06:01.265 00:06:01.265 ' 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:01.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.265 --rc genhtml_branch_coverage=1 00:06:01.265 --rc genhtml_function_coverage=1 00:06:01.265 --rc genhtml_legend=1 00:06:01.265 --rc geninfo_all_blocks=1 00:06:01.265 --rc geninfo_unexecuted_blocks=1 00:06:01.265 00:06:01.265 ' 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:01.265 03:56:54 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60141 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60141 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 60141 ']' 00:06:01.265 03:56:54 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.265 03:56:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:01.265 [2024-10-13 03:56:54.402269] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:06:01.265 [2024-10-13 03:56:54.402553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60141 ] 00:06:01.526 [2024-10-13 03:56:54.553393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.526 [2024-10-13 03:56:54.633202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.093 03:56:55 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.093 03:56:55 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:02.093 03:56:55 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:02.093 03:56:55 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:02.093 03:56:55 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:02.093 03:56:55 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:02.093 03:56:55 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:02.351 03:56:55 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:02.351 03:56:55 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.351 03:56:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.609 03:56:55 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.609 03:56:55 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:02.609 03:56:55 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.609 03:56:55 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.609 03:56:55 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.609 03:56:55 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:02.609 03:56:55 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.609 03:56:55 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:02.609 03:56:55 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.609 03:56:55 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:02.609 03:56:55 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:02.610 03:56:55 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ae94ebeb-4814-4a25-b612-5b31225905d1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ae94ebeb-4814-4a25-b612-5b31225905d1",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "7b35de92-1aab-4137-b294-63a83920b55d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7b35de92-1aab-4137-b294-63a83920b55d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9533593a-e82f-454b-a1cb-4d20f4c9df54"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9533593a-e82f-454b-a1cb-4d20f4c9df54",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "9040405a-abf2-4ac4-bacf-2a202aa2c150"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9040405a-abf2-4ac4-bacf-2a202aa2c150",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "50b482d8-5d9a-4c57-8448-edc8f74f3a07"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "50b482d8-5d9a-4c57-8448-edc8f74f3a07",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f918d390-6d74-4031-af13-1333a17374f2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f918d390-6d74-4031-af13-1333a17374f2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:02.610 03:56:55 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:02.610 03:56:55 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:02.610 03:56:55 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:02.610 03:56:55 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60141 00:06:02.610 03:56:55 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 60141 ']' 00:06:02.610 03:56:55 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 60141 00:06:02.610 03:56:55 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:02.610 03:56:55 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.610 03:56:55 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60141 00:06:02.610 killing process with pid 60141 00:06:02.610 03:56:55 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.610 03:56:55 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.610 03:56:55 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60141' 00:06:02.610 03:56:55 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 60141 00:06:02.610 03:56:55 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 60141 00:06:03.986 03:56:56 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:03.986 03:56:56 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:03.986 03:56:56 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:03.986 03:56:56 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.986 03:56:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.986 ************************************ 00:06:03.986 START TEST bdev_hello_world 00:06:03.986 ************************************ 00:06:03.986 03:56:56 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:03.986 [2024-10-13 03:56:56.951247] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:03.986 [2024-10-13 03:56:56.951510] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60219 ] 00:06:03.986 [2024-10-13 03:56:57.104155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.245 [2024-10-13 03:56:57.221754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.811 [2024-10-13 03:56:57.761289] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:04.811 [2024-10-13 03:56:57.761445] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:04.811 [2024-10-13 03:56:57.761471] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:04.811 [2024-10-13 03:56:57.763852] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:04.811 [2024-10-13 03:56:57.764534] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:04.811 [2024-10-13 03:56:57.764646] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:04.811 [2024-10-13 03:56:57.764906] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
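The bdev_hello_world test above writes "Hello World!" through the bdev layer to Nvme0n1 and reads it back. To repeat it outside the test harness, the same invocation recorded in the xtrace can be used directly; a minimal sketch, assuming sudo for device and hugepage access:

  sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1
  # Expected notices, as in this log: "Opening the bdev Nvme0n1", "Writing to the bdev",
  # "bdev io write completed successfully", "Read string from bdev : Hello World!"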
00:06:04.811 00:06:04.811 [2024-10-13 03:56:57.764926] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:05.376 00:06:05.376 real 0m1.586s 00:06:05.376 user 0m1.305s 00:06:05.376 sys 0m0.171s 00:06:05.376 ************************************ 00:06:05.376 END TEST bdev_hello_world 00:06:05.376 ************************************ 00:06:05.376 03:56:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.376 03:56:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:05.376 03:56:58 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:05.376 03:56:58 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:05.376 03:56:58 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.376 03:56:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.376 ************************************ 00:06:05.376 START TEST bdev_bounds 00:06:05.376 ************************************ 00:06:05.376 Process bdevio pid: 60256 00:06:05.376 03:56:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:05.376 03:56:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60256 00:06:05.376 03:56:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.376 03:56:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60256' 00:06:05.376 03:56:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60256 00:06:05.376 03:56:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 60256 ']' 00:06:05.376 03:56:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.376 03:56:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.376 03:56:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.376 03:56:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.376 03:56:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:05.376 03:56:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:05.634 [2024-10-13 03:56:58.594259] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
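The bdev_bounds test that runs next drives the bdevio application in two steps: bdevio is launched with -w so it waits for an explicit RPC trigger (and with -s 0, matching the PRE_RESERVED_MEM=0 set earlier in this log), and tests.py perform_tests then kicks off the CUnit suites against every registered bdev. A hedged standalone sketch using the same paths as the xtrace; the backgrounding and manual ordering are illustrative:

  SPDK=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 \
      --json "$SPDK/test/bdev/bdev.json" &       # -w: do not run tests until triggered over RPC
  bdevio_pid=$!
  # Once the app is listening on /var/tmp/spdk.sock, trigger the suites:
  "$SPDK/test/bdev/bdevio/tests.py" perform_tests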
00:06:05.634 [2024-10-13 03:56:58.594380] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60256 ] 00:06:05.634 [2024-10-13 03:56:58.740359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.892 [2024-10-13 03:56:58.841031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.892 [2024-10-13 03:56:58.841521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.892 [2024-10-13 03:56:58.841696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.457 03:56:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.457 03:56:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:06.457 03:56:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:06.457 I/O targets: 00:06:06.457 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:06.457 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:06.457 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:06.457 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:06.457 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:06.457 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:06.457 00:06:06.457 00:06:06.457 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.457 http://cunit.sourceforge.net/ 00:06:06.457 00:06:06.457 00:06:06.457 Suite: bdevio tests on: Nvme3n1 00:06:06.457 Test: blockdev write read block ...passed 00:06:06.457 Test: blockdev write zeroes read block ...passed 00:06:06.457 Test: blockdev write zeroes read no split ...passed 00:06:06.457 Test: blockdev write zeroes read split ...passed 00:06:06.457 Test: blockdev write zeroes read split partial ...passed 00:06:06.457 Test: blockdev reset ...[2024-10-13 03:56:59.586075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:06.457 passed 00:06:06.457 Test: blockdev write read 8 blocks ...[2024-10-13 03:56:59.589754] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:06.457 passed 00:06:06.457 Test: blockdev write read size > 128k ...passed 00:06:06.457 Test: blockdev write read invalid size ...passed 00:06:06.457 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.457 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.457 Test: blockdev write read max offset ...passed 00:06:06.457 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.457 Test: blockdev writev readv 8 blocks ...passed 00:06:06.457 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.457 Test: blockdev writev readv block ...passed 00:06:06.457 Test: blockdev writev readv size > 128k ...passed 00:06:06.457 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.457 Test: blockdev comparev and writev ...[2024-10-13 03:56:59.608582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b320a000 len:0x1000 00:06:06.457 passed 00:06:06.457 Test: blockdev nvme passthru rw ...passed 00:06:06.457 Test: blockdev nvme passthru vendor specific ...[2024-10-13 03:56:59.608962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.457 [2024-10-13 03:56:59.609635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:06.457 [2024-10-13 03:56:59.609716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:06.457 passed 00:06:06.715 Test: blockdev nvme admin passthru ...passed 00:06:06.715 Test: blockdev copy ...passed 00:06:06.715 Suite: bdevio tests on: Nvme2n3 00:06:06.715 Test: blockdev write read block ...passed 00:06:06.715 Test: blockdev write zeroes read block ...passed 00:06:06.715 Test: blockdev write zeroes read no split ...passed 00:06:06.715 Test: blockdev write zeroes read split ...passed 00:06:06.715 Test: blockdev write zeroes read split partial ...passed 00:06:06.715 Test: blockdev reset ...[2024-10-13 03:56:59.668204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:06.715 [2024-10-13 03:56:59.671780] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:06.715 passed 00:06:06.715 Test: blockdev write read 8 blocks ...passed 00:06:06.715 Test: blockdev write read size > 128k ...passed 00:06:06.715 Test: blockdev write read invalid size ...passed 00:06:06.715 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.715 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.715 Test: blockdev write read max offset ...passed 00:06:06.715 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.715 Test: blockdev writev readv 8 blocks ...passed 00:06:06.715 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.715 Test: blockdev writev readv block ...passed 00:06:06.715 Test: blockdev writev readv size > 128k ...passed 00:06:06.715 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.715 Test: blockdev comparev and writev ...[2024-10-13 03:56:59.688001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x292806000 len:0x1000 00:06:06.715 [2024-10-13 03:56:59.688202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.715 passed 00:06:06.715 Test: blockdev nvme passthru rw ...passed 00:06:06.716 Test: blockdev nvme passthru vendor specific ...[2024-10-13 03:56:59.690765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:06.716 passed 00:06:06.716 Test: blockdev nvme admin passthru ...[2024-10-13 03:56:59.690857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:06.716 passed 00:06:06.716 Test: blockdev copy ...passed 00:06:06.716 Suite: bdevio tests on: Nvme2n2 00:06:06.716 Test: blockdev write read block ...passed 00:06:06.716 Test: blockdev write zeroes read block ...passed 00:06:06.716 Test: blockdev write zeroes read no split ...passed 00:06:06.716 Test: blockdev write zeroes read split ...passed 00:06:06.716 Test: blockdev write zeroes read split partial ...passed 00:06:06.716 Test: blockdev reset ...[2024-10-13 03:56:59.748853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:06.716 [2024-10-13 03:56:59.752260] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:06.716 passed 00:06:06.716 Test: blockdev write read 8 blocks ...passed 00:06:06.716 Test: blockdev write read size > 128k ...passed 00:06:06.716 Test: blockdev write read invalid size ...passed 00:06:06.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.716 Test: blockdev write read max offset ...passed 00:06:06.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.716 Test: blockdev writev readv 8 blocks ...passed 00:06:06.716 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.716 Test: blockdev writev readv block ...passed 00:06:06.716 Test: blockdev writev readv size > 128k ...passed 00:06:06.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.716 Test: blockdev comparev and writev ...[2024-10-13 03:56:59.772026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cea3c000 len:0x1000 00:06:06.716 [2024-10-13 03:56:59.772250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.716 passed 00:06:06.716 Test: blockdev nvme passthru rw ...passed 00:06:06.716 Test: blockdev nvme passthru vendor specific ...[2024-10-13 03:56:59.776588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:06.716 [2024-10-13 03:56:59.776774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:06.716 passed 00:06:06.716 Test: blockdev nvme admin passthru ...passed 00:06:06.716 Test: blockdev copy ...passed 00:06:06.716 Suite: bdevio tests on: Nvme2n1 00:06:06.716 Test: blockdev write read block ...passed 00:06:06.716 Test: blockdev write zeroes read block ...passed 00:06:06.716 Test: blockdev write zeroes read no split ...passed 00:06:06.716 Test: blockdev write zeroes read split ...passed 00:06:06.716 Test: blockdev write zeroes read split partial ...passed 00:06:06.716 Test: blockdev reset ...[2024-10-13 03:56:59.835540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:06.716 [2024-10-13 03:56:59.839772] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:06.716 passed 00:06:06.716 Test: blockdev write read 8 blocks ...passed 00:06:06.716 Test: blockdev write read size > 128k ...passed 00:06:06.716 Test: blockdev write read invalid size ...passed 00:06:06.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:06.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:06.716 Test: blockdev write read max offset ...passed 00:06:06.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:06.716 Test: blockdev writev readv 8 blocks ...passed 00:06:06.716 Test: blockdev writev readv 30 x 1block ...passed 00:06:06.716 Test: blockdev writev readv block ...passed 00:06:06.716 Test: blockdev writev readv size > 128k ...passed 00:06:06.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:06.716 Test: blockdev comparev and writev ...[2024-10-13 03:56:59.853392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cea38000 len:0x1000 00:06:06.716 [2024-10-13 03:56:59.853439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:06.716 passed 00:06:06.716 Test: blockdev nvme passthru rw ...passed 00:06:06.716 Test: blockdev nvme passthru vendor specific ...[2024-10-13 03:56:59.854819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:06.716 [2024-10-13 03:56:59.854844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:06.716 passed 00:06:06.716 Test: blockdev nvme admin passthru ...passed 00:06:06.716 Test: blockdev copy ...passed 00:06:06.716 Suite: bdevio tests on: Nvme1n1 00:06:06.716 Test: blockdev write read block ...passed 00:06:06.716 Test: blockdev write zeroes read block ...passed 00:06:06.716 Test: blockdev write zeroes read no split ...passed 00:06:07.017 Test: blockdev write zeroes read split ...passed 00:06:07.017 Test: blockdev write zeroes read split partial ...passed 00:06:07.017 Test: blockdev reset ...[2024-10-13 03:56:59.900154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:07.017 [2024-10-13 03:56:59.903791] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:07.017 passed 00:06:07.017 Test: blockdev write read 8 blocks ...passed 00:06:07.017 Test: blockdev write read size > 128k ...passed 00:06:07.017 Test: blockdev write read invalid size ...passed 00:06:07.017 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:07.017 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:07.017 Test: blockdev write read max offset ...passed 00:06:07.017 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:07.017 Test: blockdev writev readv 8 blocks ...passed 00:06:07.017 Test: blockdev writev readv 30 x 1block ...passed 00:06:07.017 Test: blockdev writev readv block ...passed 00:06:07.017 Test: blockdev writev readv size > 128k ...passed 00:06:07.017 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:07.017 Test: blockdev comparev and writev ...[2024-10-13 03:56:59.921035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cea34000 len:0x1000 00:06:07.017 [2024-10-13 03:56:59.921182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:07.017 passed 00:06:07.017 Test: blockdev nvme passthru rw ...passed 00:06:07.017 Test: blockdev nvme passthru vendor specific ...[2024-10-13 03:56:59.923602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:07.017 [2024-10-13 03:56:59.923721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:07.017 passed 00:06:07.017 Test: blockdev nvme admin passthru ...passed 00:06:07.017 Test: blockdev copy ...passed 00:06:07.017 Suite: bdevio tests on: Nvme0n1 00:06:07.017 Test: blockdev write read block ...passed 00:06:07.017 Test: blockdev write zeroes read block ...passed 00:06:07.017 Test: blockdev write zeroes read no split ...passed 00:06:07.017 Test: blockdev write zeroes read split ...passed 00:06:07.017 Test: blockdev write zeroes read split partial ...passed 00:06:07.017 Test: blockdev reset ...[2024-10-13 03:56:59.977229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:07.017 [2024-10-13 03:56:59.980957] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:07.017 passed 00:06:07.017 Test: blockdev write read 8 blocks ...passed 00:06:07.017 Test: blockdev write read size > 128k ...passed 00:06:07.017 Test: blockdev write read invalid size ...passed 00:06:07.017 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:07.017 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:07.017 Test: blockdev write read max offset ...passed 00:06:07.017 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:07.017 Test: blockdev writev readv 8 blocks ...passed 00:06:07.017 Test: blockdev writev readv 30 x 1block ...passed 00:06:07.017 Test: blockdev writev readv block ...passed 00:06:07.017 Test: blockdev writev readv size > 128k ...passed 00:06:07.017 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:07.017 Test: blockdev comparev and writev ...passed 00:06:07.017 Test: blockdev nvme passthru rw ...[2024-10-13 03:56:59.987943] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:07.017 separate metadata which is not supported yet. 00:06:07.017 passed 00:06:07.017 Test: blockdev nvme passthru vendor specific ...passed 00:06:07.018 Test: blockdev nvme admin passthru ...[2024-10-13 03:56:59.988332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:07.018 [2024-10-13 03:56:59.988367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:07.018 passed 00:06:07.018 Test: blockdev copy ...passed 00:06:07.018 00:06:07.018 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.018 suites 6 6 n/a 0 0 00:06:07.018 tests 138 138 138 0 0 00:06:07.018 asserts 893 893 893 0 n/a 00:06:07.018 00:06:07.018 Elapsed time = 1.178 seconds 00:06:07.018 0 00:06:07.018 03:57:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60256 00:06:07.018 03:57:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 60256 ']' 00:06:07.018 03:57:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 60256 00:06:07.018 03:57:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:07.018 03:57:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.018 03:57:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60256 00:06:07.018 03:57:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.018 03:57:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.018 03:57:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60256' 00:06:07.018 killing process with pid 60256 00:06:07.018 03:57:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 60256 00:06:07.018 03:57:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 60256 00:06:07.607 03:57:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:07.607 00:06:07.607 real 0m2.188s 00:06:07.607 user 0m5.579s 00:06:07.607 sys 0m0.267s 00:06:07.607 03:57:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.607 03:57:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:07.607 ************************************ 00:06:07.607 END 
TEST bdev_bounds 00:06:07.607 ************************************ 00:06:07.865 03:57:00 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:07.865 03:57:00 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:07.865 03:57:00 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.865 03:57:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:07.865 ************************************ 00:06:07.865 START TEST bdev_nbd 00:06:07.865 ************************************ 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:07.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
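The bdev_nbd test below runs against a bdev_svc application that listens on its own RPC socket, /var/tmp/spdk-nbd.sock, so NBD mappings can be created and torn down over RPC. A minimal sketch of that launch, mirroring the bdev_svc command that appears in the xtrace below (the backgrounding is illustrative):

  SPDK=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-nbd.sock -i 0 \
      --json "$SPDK/test/bdev/bdev.json" &       # minimal app: brings up the bdevs and serves RPC
  nbd_pid=$!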
00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60310 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60310 /var/tmp/spdk-nbd.sock 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 60310 ']' 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:07.865 03:57:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:07.865 [2024-10-13 03:57:00.852230] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
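Each per-device check that follows is the same round trip: nbd_start_disk maps a bdev onto an NBD node, waitfornbd polls /proc/partitions until the kernel exposes it, a single 4096-byte O_DIRECT dd verifies I/O, and nbd_stop_disk detaches it. Condensed into a hedged sketch for one bdev, using the socket and paths from this log; the waitfornbd/waitfornbd_exit retry loops are collapsed to single checks here:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  nbd_device=$($RPC nbd_start_disk Nvme0n1 /dev/nbd0)    # the RPC prints the nbd path it attached
  grep -q -w nbd0 /proc/partitions                       # device is now visible to the kernel
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
      bs=4096 count=1 iflag=direct                       # read one block with O_DIRECT
  $RPC nbd_get_disks                                     # JSON list of bdev<->nbd mappings
  $RPC nbd_stop_disk /dev/nbd0                           # tear the mapping down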
00:06:07.865 [2024-10-13 03:57:00.852457] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:07.865 [2024-10-13 03:57:01.000926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.123 [2024-10-13 03:57:01.101715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.690 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.690 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:08.690 03:57:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:08.691 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.691 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:08.691 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:08.691 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:08.691 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.691 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:08.691 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:08.691 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:08.691 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:08.691 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:08.691 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:08.691 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:08.949 1+0 records in 
00:06:08.949 1+0 records out 00:06:08.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866874 s, 4.7 MB/s 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:08.949 03:57:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.209 1+0 records in 00:06:09.209 1+0 records out 00:06:09.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424729 s, 9.6 MB/s 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:09.209 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.467 1+0 records in 00:06:09.467 1+0 records out 00:06:09.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102313 s, 4.0 MB/s 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.467 1+0 records in 00:06:09.467 1+0 records out 00:06:09.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559899 s, 7.3 MB/s 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.467 03:57:02 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:09.467 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.468 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:09.468 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:09.468 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.468 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:09.468 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.726 1+0 records in 00:06:09.726 1+0 records out 00:06:09.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106859 s, 3.8 MB/s 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:09.726 03:57:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.984 1+0 records in 00:06:09.984 1+0 records out 00:06:09.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00151733 s, 2.7 MB/s 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:09.984 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.242 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:10.242 { 00:06:10.242 "nbd_device": "/dev/nbd0", 00:06:10.242 "bdev_name": "Nvme0n1" 00:06:10.242 }, 00:06:10.242 { 00:06:10.242 "nbd_device": "/dev/nbd1", 00:06:10.242 "bdev_name": "Nvme1n1" 00:06:10.242 }, 00:06:10.242 { 00:06:10.242 "nbd_device": "/dev/nbd2", 00:06:10.242 "bdev_name": "Nvme2n1" 00:06:10.242 }, 00:06:10.242 { 00:06:10.242 "nbd_device": "/dev/nbd3", 00:06:10.242 "bdev_name": "Nvme2n2" 00:06:10.242 }, 00:06:10.242 { 00:06:10.242 "nbd_device": "/dev/nbd4", 00:06:10.242 "bdev_name": "Nvme2n3" 00:06:10.242 }, 00:06:10.242 { 00:06:10.242 "nbd_device": "/dev/nbd5", 00:06:10.242 "bdev_name": "Nvme3n1" 00:06:10.242 } 00:06:10.242 ]' 00:06:10.242 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:10.242 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:10.242 { 00:06:10.242 "nbd_device": "/dev/nbd0", 00:06:10.242 "bdev_name": "Nvme0n1" 00:06:10.242 }, 00:06:10.242 { 00:06:10.242 "nbd_device": "/dev/nbd1", 00:06:10.242 "bdev_name": "Nvme1n1" 00:06:10.242 }, 00:06:10.242 { 00:06:10.242 "nbd_device": "/dev/nbd2", 00:06:10.242 "bdev_name": "Nvme2n1" 00:06:10.242 }, 00:06:10.242 { 00:06:10.242 "nbd_device": "/dev/nbd3", 00:06:10.242 "bdev_name": "Nvme2n2" 00:06:10.242 }, 00:06:10.242 { 00:06:10.242 "nbd_device": "/dev/nbd4", 00:06:10.242 "bdev_name": "Nvme2n3" 00:06:10.242 }, 00:06:10.242 { 00:06:10.242 "nbd_device": "/dev/nbd5", 00:06:10.242 "bdev_name": "Nvme3n1" 00:06:10.242 } 00:06:10.242 ]' 00:06:10.242 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:10.242 03:57:03 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:10.242 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.242 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:10.242 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.242 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:10.242 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.242 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.500 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:10.760 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:10.760 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:10.760 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:10.760 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.760 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.760 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:10.760 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:10.760 03:57:03 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:10.760 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.760 03:57:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:11.019 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:11.019 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:11.019 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:11.019 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.019 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.019 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:11.019 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.020 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.020 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.020 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:11.277 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:11.277 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:11.277 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:11.277 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.277 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.277 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:11.277 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.277 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.277 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.277 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.535 03:57:04 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.535 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.851 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:11.852 /dev/nbd0 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:11.852 
03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.852 1+0 records in 00:06:11.852 1+0 records out 00:06:11.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000825595 s, 5.0 MB/s 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:11.852 03:57:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:12.110 /dev/nbd1 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.110 1+0 records in 00:06:12.110 1+0 records out 00:06:12.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706552 s, 5.8 MB/s 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:12.110 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:12.370 /dev/nbd10 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.370 1+0 records in 00:06:12.370 1+0 records out 00:06:12.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423404 s, 9.7 MB/s 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:12.370 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:12.631 /dev/nbd11 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.631 03:57:05 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.631 1+0 records in 00:06:12.631 1+0 records out 00:06:12.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362181 s, 11.3 MB/s 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:12.631 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:12.894 /dev/nbd12 00:06:12.894 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:12.894 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:12.894 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:12.894 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:12.894 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.894 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.894 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:12.894 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:12.894 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.894 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.894 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.894 1+0 records in 00:06:12.894 1+0 records out 00:06:12.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399609 s, 10.3 MB/s 00:06:12.895 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.895 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:12.895 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.895 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.895 03:57:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:12.895 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.895 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:12.895 03:57:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:13.155 /dev/nbd13 
00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:13.155 1+0 records in 00:06:13.155 1+0 records out 00:06:13.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645933 s, 6.3 MB/s 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.155 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.416 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.416 { 00:06:13.416 "nbd_device": "/dev/nbd0", 00:06:13.416 "bdev_name": "Nvme0n1" 00:06:13.416 }, 00:06:13.416 { 00:06:13.416 "nbd_device": "/dev/nbd1", 00:06:13.416 "bdev_name": "Nvme1n1" 00:06:13.416 }, 00:06:13.416 { 00:06:13.416 "nbd_device": "/dev/nbd10", 00:06:13.416 "bdev_name": "Nvme2n1" 00:06:13.416 }, 00:06:13.416 { 00:06:13.416 "nbd_device": "/dev/nbd11", 00:06:13.416 "bdev_name": "Nvme2n2" 00:06:13.416 }, 00:06:13.416 { 00:06:13.416 "nbd_device": "/dev/nbd12", 00:06:13.416 "bdev_name": "Nvme2n3" 00:06:13.416 }, 00:06:13.416 { 00:06:13.416 "nbd_device": "/dev/nbd13", 00:06:13.416 "bdev_name": "Nvme3n1" 00:06:13.416 } 00:06:13.416 ]' 00:06:13.416 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.416 { 00:06:13.416 "nbd_device": "/dev/nbd0", 00:06:13.416 "bdev_name": "Nvme0n1" 00:06:13.416 }, 00:06:13.416 { 00:06:13.417 "nbd_device": "/dev/nbd1", 00:06:13.417 "bdev_name": "Nvme1n1" 00:06:13.417 }, 00:06:13.417 { 00:06:13.417 "nbd_device": "/dev/nbd10", 00:06:13.417 "bdev_name": "Nvme2n1" 
00:06:13.417 }, 00:06:13.417 { 00:06:13.417 "nbd_device": "/dev/nbd11", 00:06:13.417 "bdev_name": "Nvme2n2" 00:06:13.417 }, 00:06:13.417 { 00:06:13.417 "nbd_device": "/dev/nbd12", 00:06:13.417 "bdev_name": "Nvme2n3" 00:06:13.417 }, 00:06:13.417 { 00:06:13.417 "nbd_device": "/dev/nbd13", 00:06:13.417 "bdev_name": "Nvme3n1" 00:06:13.417 } 00:06:13.417 ]' 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.417 /dev/nbd1 00:06:13.417 /dev/nbd10 00:06:13.417 /dev/nbd11 00:06:13.417 /dev/nbd12 00:06:13.417 /dev/nbd13' 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.417 /dev/nbd1 00:06:13.417 /dev/nbd10 00:06:13.417 /dev/nbd11 00:06:13.417 /dev/nbd12 00:06:13.417 /dev/nbd13' 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:13.417 256+0 records in 00:06:13.417 256+0 records out 00:06:13.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669643 s, 157 MB/s 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.417 256+0 records in 00:06:13.417 256+0 records out 00:06:13.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0555152 s, 18.9 MB/s 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.417 256+0 records in 00:06:13.417 256+0 records out 00:06:13.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0499155 s, 21.0 MB/s 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:13.417 256+0 records in 00:06:13.417 256+0 records out 
00:06:13.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0541155 s, 19.4 MB/s 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.417 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:13.678 256+0 records in 00:06:13.678 256+0 records out 00:06:13.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0492663 s, 21.3 MB/s 00:06:13.678 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.678 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:13.678 256+0 records in 00:06:13.678 256+0 records out 00:06:13.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0486996 s, 21.5 MB/s 00:06:13.678 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.678 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:13.939 256+0 records in 00:06:13.939 256+0 records out 00:06:13.939 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224926 s, 4.7 MB/s 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.939 03:57:06 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.939 03:57:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.201 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.201 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.201 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.201 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.201 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.201 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.201 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.201 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.201 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.201 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.462 
03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.462 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:14.725 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:14.725 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:14.725 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:14.725 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.725 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.725 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:14.725 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.725 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.725 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.725 03:57:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:14.990 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:14.990 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:14.990 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:14.990 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.990 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.990 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:14.990 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.990 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.990 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.990 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:15.252 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:15.252 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:15.252 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:15.252 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.252 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.252 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:15.252 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.252 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.252 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.252 03:57:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.252 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:15.576 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:15.576 malloc_lvol_verify 00:06:15.837 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:15.837 2af553df-0190-4051-9c20-e61289da95ef 00:06:15.837 03:57:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:16.098 4f9f24e0-8b2d-4300-9ac8-a254a7982191 00:06:16.098 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:16.391 /dev/nbd0 00:06:16.391 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:16.391 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:16.391 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:16.391 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:16.391 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:16.391 mke2fs 1.47.0 (5-Feb-2023) 00:06:16.391 Discarding device blocks: 0/4096 done 00:06:16.391 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:16.391 00:06:16.391 Allocating group tables: 0/1 done 00:06:16.391 Writing inode tables: 0/1 done 00:06:16.391 Creating journal (1024 blocks): done 00:06:16.391 Writing superblocks and filesystem accounting information: 0/1 done 00:06:16.391 00:06:16.391 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
00:06:16.391 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.391 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:16.391 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.391 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:16.391 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.391 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60310 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 60310 ']' 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 60310 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60310 00:06:16.653 killing process with pid 60310 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60310' 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 60310 00:06:16.653 03:57:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 60310 00:06:17.597 ************************************ 00:06:17.597 END TEST bdev_nbd 00:06:17.597 ************************************ 00:06:17.597 03:57:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:17.597 00:06:17.597 real 0m9.745s 00:06:17.597 user 0m13.897s 00:06:17.597 sys 0m3.046s 00:06:17.597 03:57:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.597 03:57:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:17.597 03:57:10 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:17.597 03:57:10 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:17.597 skipping fio tests on NVMe due to multi-ns failures. 00:06:17.597 03:57:10 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
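For reference, the bdev_nbd trace above reduces to the following per-device shell pattern. This is a condensed sketch based only on the commands visible in the trace: the retry bound of 20, the 4 KiB direct probe read, the 1 MiB cmp window, the jq filter and the /var/tmp/spdk-nbd.sock path are taken from it verbatim, while the loop/exit conditions are paraphrased and the variable names and /tmp scratch paths are illustrative (the script itself uses files under spdk/test/bdev/).

  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # attach a bdev to an NBD node and poll (up to 20 times) until the kernel publishes it
  "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
  for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions && break
  done
  # probe: one direct 4 KiB read must produce a non-empty file
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  size=$(stat -c %s /tmp/nbdtest); rm -f /tmp/nbdtest; [ "$size" != 0 ]
  # data check: push 1 MiB of random data through the NBD node, then compare it back
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
  # count exported devices via RPC + jq, then detach and wait for the node to disappear
  "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions || break
  done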
00:06:17.597 03:57:10 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:17.597 03:57:10 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:17.597 03:57:10 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:17.597 03:57:10 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.597 03:57:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:17.597 ************************************ 00:06:17.597 START TEST bdev_verify 00:06:17.597 ************************************ 00:06:17.597 03:57:10 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:17.597 [2024-10-13 03:57:10.668366] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:17.597 [2024-10-13 03:57:10.668507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60689 ] 00:06:17.858 [2024-10-13 03:57:10.821748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.858 [2024-10-13 03:57:10.949539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.858 [2024-10-13 03:57:10.949667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.430 Running I/O for 5 seconds... 00:06:20.744 20736.00 IOPS, 81.00 MiB/s [2024-10-13T03:57:14.843Z] 20992.00 IOPS, 82.00 MiB/s [2024-10-13T03:57:15.851Z] 20586.67 IOPS, 80.42 MiB/s [2024-10-13T03:57:16.791Z] 20256.00 IOPS, 79.12 MiB/s [2024-10-13T03:57:16.791Z] 20300.80 IOPS, 79.30 MiB/s 00:06:23.631 Latency(us) 00:06:23.631 [2024-10-13T03:57:16.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:23.631 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:23.631 Verification LBA range: start 0x0 length 0xbd0bd 00:06:23.631 Nvme0n1 : 5.05 1648.03 6.44 0.00 0.00 77412.97 14216.27 93565.24 00:06:23.631 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:23.631 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:23.631 Nvme0n1 : 5.07 1666.19 6.51 0.00 0.00 76548.56 14518.74 94371.84 00:06:23.631 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:23.631 Verification LBA range: start 0x0 length 0xa0000 00:06:23.631 Nvme1n1 : 5.05 1647.55 6.44 0.00 0.00 77344.30 17442.66 87515.77 00:06:23.631 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:23.631 Verification LBA range: start 0xa0000 length 0xa0000 00:06:23.631 Nvme1n1 : 5.07 1665.73 6.51 0.00 0.00 76257.76 16031.11 77030.01 00:06:23.631 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:23.631 Verification LBA range: start 0x0 length 0x80000 00:06:23.631 Nvme2n1 : 5.10 1657.46 6.47 0.00 0.00 76523.73 14720.39 70577.23 00:06:23.631 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:23.631 Verification LBA range: start 0x80000 length 0x80000 00:06:23.631 Nvme2n1 : 5.09 1671.98 6.53 0.00 0.00 75648.04 7360.20 64527.75 00:06:23.631 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:23.631 Verification LBA range: start 0x0 length 0x80000 00:06:23.631 Nvme2n2 : 5.10 1657.01 6.47 0.00 0.00 76444.52 14922.04 71787.13 00:06:23.631 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:23.631 Verification LBA range: start 0x80000 length 0x80000 00:06:23.631 Nvme2n2 : 5.09 1671.54 6.53 0.00 0.00 75505.48 6856.07 62511.26 00:06:23.631 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:23.631 Verification LBA range: start 0x0 length 0x80000 00:06:23.631 Nvme2n3 : 5.10 1656.57 6.47 0.00 0.00 76356.97 15224.52 73803.62 00:06:23.631 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:23.631 Verification LBA range: start 0x80000 length 0x80000 00:06:23.631 Nvme2n3 : 5.10 1680.71 6.57 0.00 0.00 75106.55 8015.56 60494.77 00:06:23.631 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:23.631 Verification LBA range: start 0x0 length 0x20000 00:06:23.631 Nvme3n1 : 5.10 1656.08 6.47 0.00 0.00 76259.73 13006.38 73803.62 00:06:23.631 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:23.631 Verification LBA range: start 0x20000 length 0x20000 00:06:23.631 Nvme3n1 : 5.10 1680.27 6.56 0.00 0.00 75012.00 7965.14 64527.75 00:06:23.631 [2024-10-13T03:57:16.792Z] =================================================================================================================== 00:06:23.632 [2024-10-13T03:57:16.792Z] Total : 19959.10 77.97 0.00 0.00 76195.08 6856.07 94371.84 00:06:25.023 00:06:25.023 real 0m7.406s 00:06:25.023 user 0m13.719s 00:06:25.023 sys 0m0.311s 00:06:25.023 03:57:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.023 ************************************ 00:06:25.023 03:57:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:25.023 END TEST bdev_verify 00:06:25.023 ************************************ 00:06:25.023 03:57:18 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:25.023 03:57:18 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:25.023 03:57:18 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.023 03:57:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.023 ************************************ 00:06:25.023 START TEST bdev_verify_big_io 00:06:25.023 ************************************ 00:06:25.023 03:57:18 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:25.023 [2024-10-13 03:57:18.155514] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
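The verify runs in this section are plain bdevperf invocations against the generated bdev.json. Condensed from the command lines shown in the trace above; the BDEVPERF/CONF variable names and the flag comments are a best-effort gloss rather than authoritative documentation, and -C plus the trailing empty argument are simply passed through exactly as the test script does.

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  # bdev_verify: 4 KiB verify workload, queue depth 128, 5 s run, core mask 0x3
  "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify -t 5 -C -m 0x3 ''
  # bdev_verify_big_io: same workload with 64 KiB I/Os
  "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''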
00:06:25.023 [2024-10-13 03:57:18.155672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60787 ] 00:06:25.389 [2024-10-13 03:57:18.311094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.389 [2024-10-13 03:57:18.440834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.389 [2024-10-13 03:57:18.441058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.332 Running I/O for 5 seconds... 00:06:29.807 660.00 IOPS, 41.25 MiB/s [2024-10-13T03:57:23.906Z] 1382.50 IOPS, 86.41 MiB/s [2024-10-13T03:57:24.849Z] 1433.33 IOPS, 89.58 MiB/s [2024-10-13T03:57:25.110Z] 1557.25 IOPS, 97.33 MiB/s [2024-10-13T03:57:25.110Z] 1755.20 IOPS, 109.70 MiB/s 00:06:31.950 Latency(us) 00:06:31.950 [2024-10-13T03:57:25.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:31.950 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:31.950 Verification LBA range: start 0x0 length 0xbd0b 00:06:31.950 Nvme0n1 : 5.62 116.71 7.29 0.00 0.00 1042926.44 29440.79 1000180.18 00:06:31.950 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:31.950 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:31.950 Nvme0n1 : 5.59 125.86 7.87 0.00 0.00 981030.81 17039.36 1013085.74 00:06:31.950 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:31.950 Verification LBA range: start 0x0 length 0xa000 00:06:31.950 Nvme1n1 : 5.68 123.88 7.74 0.00 0.00 976998.26 54848.59 896935.78 00:06:31.950 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:31.950 Verification LBA range: start 0xa000 length 0xa000 00:06:31.950 Nvme1n1 : 5.75 120.33 7.52 0.00 0.00 991048.56 79046.50 1535760.54 00:06:31.950 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:31.950 Verification LBA range: start 0x0 length 0x8000 00:06:31.950 Nvme2n1 : 5.69 123.82 7.74 0.00 0.00 947464.38 54848.59 922746.88 00:06:31.950 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:31.950 Verification LBA range: start 0x8000 length 0x8000 00:06:31.950 Nvme2n1 : 5.75 124.27 7.77 0.00 0.00 938512.75 75013.51 1555118.87 00:06:31.950 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:31.950 Verification LBA range: start 0x0 length 0x8000 00:06:31.950 Nvme2n2 : 5.78 128.22 8.01 0.00 0.00 888444.47 56058.49 955010.76 00:06:31.950 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:31.950 Verification LBA range: start 0x8000 length 0x8000 00:06:31.950 Nvme2n2 : 5.81 128.51 8.03 0.00 0.00 881244.66 56058.49 1593835.52 00:06:31.950 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:31.950 Verification LBA range: start 0x0 length 0x8000 00:06:31.950 Nvme2n3 : 5.82 135.90 8.49 0.00 0.00 817093.52 38111.70 1148594.02 00:06:31.950 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:31.950 Verification LBA range: start 0x8000 length 0x8000 00:06:31.950 Nvme2n3 : 5.86 139.25 8.70 0.00 0.00 793835.01 18551.73 1619646.62 00:06:31.950 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:31.950 Verification LBA range: start 0x0 length 0x2000 00:06:31.950 Nvme3n1 : 5.86 152.87 9.55 0.00 
0.00 706710.21 2243.35 974369.08 00:06:31.950 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:31.950 Verification LBA range: start 0x2000 length 0x2000 00:06:31.950 Nvme3n1 : 5.90 161.04 10.07 0.00 0.00 665666.14 1014.55 1664816.05 00:06:31.950 [2024-10-13T03:57:25.110Z] =================================================================================================================== 00:06:31.950 [2024-10-13T03:57:25.110Z] Total : 1580.66 98.79 0.00 0.00 873495.17 1014.55 1664816.05 00:06:33.856 00:06:33.856 real 0m8.640s 00:06:33.856 user 0m16.211s 00:06:33.856 sys 0m0.311s 00:06:33.856 03:57:26 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.856 ************************************ 00:06:33.856 03:57:26 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:33.856 END TEST bdev_verify_big_io 00:06:33.856 ************************************ 00:06:33.856 03:57:26 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.856 03:57:26 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:33.856 03:57:26 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.856 03:57:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.856 ************************************ 00:06:33.856 START TEST bdev_write_zeroes 00:06:33.856 ************************************ 00:06:33.856 03:57:26 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.856 [2024-10-13 03:57:26.842166] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:33.856 [2024-10-13 03:57:26.842281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60896 ] 00:06:33.856 [2024-10-13 03:57:26.992409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.115 [2024-10-13 03:57:27.091876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.683 Running I/O for 1 seconds... 
00:06:35.625 54144.00 IOPS, 211.50 MiB/s 00:06:35.625 Latency(us) 00:06:35.625 [2024-10-13T03:57:28.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.625 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.625 Nvme0n1 : 1.02 9012.22 35.20 0.00 0.00 14173.45 4763.96 30247.38 00:06:35.625 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.625 Nvme1n1 : 1.02 9000.90 35.16 0.00 0.00 14175.05 9427.10 23592.96 00:06:35.625 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.625 Nvme2n1 : 1.03 8989.96 35.12 0.00 0.00 14146.06 9427.10 21979.77 00:06:35.625 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.625 Nvme2n2 : 1.03 8979.10 35.07 0.00 0.00 14084.84 9477.51 20568.22 00:06:35.625 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.625 Nvme2n3 : 1.03 8968.18 35.03 0.00 0.00 14081.73 9527.93 22181.42 00:06:35.625 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.625 Nvme3n1 : 1.03 8957.37 34.99 0.00 0.00 14047.59 7461.02 23592.96 00:06:35.625 [2024-10-13T03:57:28.785Z] =================================================================================================================== 00:06:35.625 [2024-10-13T03:57:28.785Z] Total : 53907.73 210.58 0.00 0.00 14118.12 4763.96 30247.38 00:06:36.572 00:06:36.572 real 0m2.732s 00:06:36.572 user 0m2.415s 00:06:36.572 sys 0m0.197s 00:06:36.572 03:57:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.572 ************************************ 00:06:36.572 03:57:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:36.572 END TEST bdev_write_zeroes 00:06:36.572 ************************************ 00:06:36.572 03:57:29 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.572 03:57:29 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:36.572 03:57:29 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.572 03:57:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:36.572 ************************************ 00:06:36.573 START TEST bdev_json_nonenclosed 00:06:36.573 ************************************ 00:06:36.573 03:57:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.573 [2024-10-13 03:57:29.649436] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:06:36.573 [2024-10-13 03:57:29.649574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60949 ] 00:06:36.834 [2024-10-13 03:57:29.801747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.834 [2024-10-13 03:57:29.926187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.834 [2024-10-13 03:57:29.926295] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:36.834 [2024-10-13 03:57:29.926315] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:36.834 [2024-10-13 03:57:29.926326] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.095 00:06:37.095 real 0m0.544s 00:06:37.095 user 0m0.323s 00:06:37.095 sys 0m0.115s 00:06:37.095 03:57:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.095 03:57:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:37.095 ************************************ 00:06:37.095 END TEST bdev_json_nonenclosed 00:06:37.095 ************************************ 00:06:37.095 03:57:30 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:37.095 03:57:30 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:37.095 03:57:30 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.095 03:57:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.095 ************************************ 00:06:37.095 START TEST bdev_json_nonarray 00:06:37.095 ************************************ 00:06:37.096 03:57:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:37.357 [2024-10-13 03:57:30.259162] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:37.357 [2024-10-13 03:57:30.259318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60979 ] 00:06:37.357 [2024-10-13 03:57:30.414738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.619 [2024-10-13 03:57:30.545896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.619 [2024-10-13 03:57:30.546010] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
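The bdev_json_nonenclosed and bdev_json_nonarray tests above feed deliberately malformed configs to bdevperf and only check that the app exits through spdk_app_stop with an error. A hypothetical illustration of the shapes involved (not the literal contents of the nonenclosed.json / nonarray.json fixtures):

  # "not enclosed in {}"              -> the nonenclosed case: a bare key/value with no outer object, e.g.
  #     "subsystems": [ { "subsystem": "bdev", "config": [] } ]
  # "'subsystems' should be an array" -> the nonarray case: subsystems given as an object, e.g.
  #     { "subsystems": { "subsystem": "bdev", "config": [] } }
  # A well-formed config is a top-level object whose "subsystems" key is an array (jq used only to emit the JSON):
  jq -n '{ subsystems: [ { subsystem: "bdev", config: [] } ] }' > /tmp/valid.json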
00:06:37.619 [2024-10-13 03:57:30.546030] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:37.619 [2024-10-13 03:57:30.546041] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.619 00:06:37.619 real 0m0.552s 00:06:37.619 user 0m0.338s 00:06:37.619 sys 0m0.107s 00:06:37.619 03:57:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.619 03:57:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:37.619 ************************************ 00:06:37.619 END TEST bdev_json_nonarray 00:06:37.619 ************************************ 00:06:37.880 03:57:30 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:37.880 03:57:30 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:37.880 03:57:30 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:37.880 03:57:30 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:37.880 03:57:30 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:37.880 03:57:30 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:37.880 03:57:30 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:37.880 03:57:30 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:37.880 03:57:30 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:37.880 03:57:30 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:37.880 03:57:30 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:37.880 00:06:37.880 real 0m36.641s 00:06:37.880 user 0m56.623s 00:06:37.880 sys 0m5.242s 00:06:37.880 ************************************ 00:06:37.880 03:57:30 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.880 03:57:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.880 END TEST blockdev_nvme 00:06:37.880 ************************************ 00:06:37.880 03:57:30 -- spdk/autotest.sh@209 -- # uname -s 00:06:37.880 03:57:30 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:37.880 03:57:30 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:37.880 03:57:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:37.880 03:57:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.880 03:57:30 -- common/autotest_common.sh@10 -- # set +x 00:06:37.880 ************************************ 00:06:37.880 START TEST blockdev_nvme_gpt 00:06:37.880 ************************************ 00:06:37.880 03:57:30 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:37.880 * Looking for test storage... 
00:06:37.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:37.880 03:57:30 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:37.880 03:57:30 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:06:37.880 03:57:30 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:37.880 03:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.880 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.141 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.141 03:57:31 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:38.141 03:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.141 03:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:38.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.141 --rc genhtml_branch_coverage=1 00:06:38.141 --rc genhtml_function_coverage=1 00:06:38.141 --rc genhtml_legend=1 00:06:38.141 --rc geninfo_all_blocks=1 00:06:38.141 --rc geninfo_unexecuted_blocks=1 00:06:38.141 00:06:38.141 ' 00:06:38.141 03:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:38.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.141 --rc 
genhtml_branch_coverage=1 00:06:38.141 --rc genhtml_function_coverage=1 00:06:38.141 --rc genhtml_legend=1 00:06:38.141 --rc geninfo_all_blocks=1 00:06:38.141 --rc geninfo_unexecuted_blocks=1 00:06:38.141 00:06:38.141 ' 00:06:38.141 03:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:38.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.141 --rc genhtml_branch_coverage=1 00:06:38.141 --rc genhtml_function_coverage=1 00:06:38.141 --rc genhtml_legend=1 00:06:38.141 --rc geninfo_all_blocks=1 00:06:38.141 --rc geninfo_unexecuted_blocks=1 00:06:38.141 00:06:38.141 ' 00:06:38.141 03:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:38.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.141 --rc genhtml_branch_coverage=1 00:06:38.141 --rc genhtml_function_coverage=1 00:06:38.141 --rc genhtml_legend=1 00:06:38.141 --rc geninfo_all_blocks=1 00:06:38.141 --rc geninfo_unexecuted_blocks=1 00:06:38.141 00:06:38.141 ' 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61053 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61053 
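The scripts/common.sh trace above is a field-by-field version comparison used to pick lcov-compatible flags: here `lt 1.15 2` is true, so the --rc lcov_*_coverage form of the options is exported. A condensed sketch of the same idea, not the script's exact code:

  version_lt() {                       # returns 0 when $1 < $2, splitting fields on . - :
    local IFS=.-: i
    local -a a b
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
  }
  version_lt 1.15 2 && echo "lcov < 2: export the --rc lcov_branch_coverage/lcov_function_coverage options"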
00:06:38.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.141 03:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 61053 ']' 00:06:38.141 03:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.141 03:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.141 03:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.141 03:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.141 03:57:31 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:38.141 03:57:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.141 [2024-10-13 03:57:31.136251] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:38.141 [2024-10-13 03:57:31.136402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61053 ] 00:06:38.141 [2024-10-13 03:57:31.289647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.403 [2024-10-13 03:57:31.417249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.976 03:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.976 03:57:32 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:06:38.976 03:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:38.976 03:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:38.976 03:57:32 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:39.548 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:39.548 Waiting for block devices as requested 00:06:39.548 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:39.548 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:39.810 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:39.810 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:45.099 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:45.099 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:45.099 03:57:37 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:45.099 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:45.099 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:45.099 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:45.099 03:57:37 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:45.099 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:45.099 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:45.099 BYT; 00:06:45.099 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:45.099 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:45.099 BYT; 00:06:45.099 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:45.099 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:45.100 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:45.100 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:45.100 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:45.100 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:45.100 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:45.100 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:45.100 03:57:37 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:45.100 03:57:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:45.100 03:57:37 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:45.100 03:57:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:45.100 03:57:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:45.100 03:57:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:45.100 03:57:37 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:45.100 03:57:37 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:45.100 03:57:37 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:45.100 03:57:37 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:45.100 03:57:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:45.100 03:57:38 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:45.100 03:57:38 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:45.100 03:57:38 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:45.100 03:57:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:45.100 03:57:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:45.100 03:57:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:45.100 03:57:38 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:45.100 03:57:38 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:45.100 03:57:38 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:45.100 03:57:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:45.100 03:57:38 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:46.038 The operation has completed successfully. 00:06:46.038 03:57:39 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:46.976 The operation has completed successfully. 00:06:46.976 03:57:40 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:47.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:47.808 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:47.808 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:47.808 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:47.808 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:48.068 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:48.068 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.068 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.068 [] 00:06:48.068 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.068 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:48.068 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:48.068 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:48.068 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:48.068 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:48.068 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.068 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.330 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.330 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:48.330 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.330 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.330 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.330 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:48.330 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:48.331 03:57:41 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.331 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.331 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.331 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:48.331 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.331 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.331 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.331 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:48.331 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.331 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.331 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.331 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:48.331 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:48.331 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:48.331 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.331 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:48.331 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.331 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:48.331 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "febe2459-2cf9-4ad2-922b-9cc36042879f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "febe2459-2cf9-4ad2-922b-9cc36042879f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9b2cb2c1-7042-451b-b528-a8b5014e8e3d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9b2cb2c1-7042-451b-b528-a8b5014e8e3d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "70c27c8c-efe4-4f32-82e4-fb4e0cac18ee"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "70c27c8c-efe4-4f32-82e4-fb4e0cac18ee",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4e7e63aa-a3d5-4e95-b831-5c31e54d0ce0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4e7e63aa-a3d5-4e95-b831-5c31e54d0ce0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5e6060ec-c70a-406e-9900-afe6f36fca7a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5e6060ec-c70a-406e-9900-afe6f36fca7a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:48.331 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:48.592 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:48.592 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:48.592 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:48.592 03:57:41 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61053 00:06:48.592 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 61053 ']' 00:06:48.592 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 61053 00:06:48.592 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:06:48.592 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.592 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61053 00:06:48.592 killing process with pid 61053 00:06:48.592 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.592 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.592 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61053' 00:06:48.592 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 61053 00:06:48.592 03:57:41 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 61053 00:06:50.506 03:57:43 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:50.506 03:57:43 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:50.506 03:57:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:50.506 03:57:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.506 03:57:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.506 ************************************ 00:06:50.506 START TEST bdev_hello_world 00:06:50.506 ************************************ 00:06:50.506 03:57:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:50.506 [2024-10-13 
03:57:43.265680] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:50.506 [2024-10-13 03:57:43.265847] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61684 ] 00:06:50.506 [2024-10-13 03:57:43.420780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.506 [2024-10-13 03:57:43.549835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.078 [2024-10-13 03:57:44.137302] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:51.078 [2024-10-13 03:57:44.137373] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:51.078 [2024-10-13 03:57:44.137397] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:51.078 [2024-10-13 03:57:44.140170] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:51.078 [2024-10-13 03:57:44.141366] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:51.078 [2024-10-13 03:57:44.141414] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:51.078 [2024-10-13 03:57:44.142061] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:51.078 00:06:51.078 [2024-10-13 03:57:44.142096] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:52.023 00:06:52.023 real 0m1.738s 00:06:52.023 user 0m1.402s 00:06:52.023 sys 0m0.221s 00:06:52.023 03:57:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.023 ************************************ 00:06:52.023 END TEST bdev_hello_world 00:06:52.023 ************************************ 00:06:52.023 03:57:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:52.023 03:57:44 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:52.023 03:57:44 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:52.023 03:57:44 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.023 03:57:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.023 ************************************ 00:06:52.023 START TEST bdev_bounds 00:06:52.023 ************************************ 00:06:52.023 03:57:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:52.023 Process bdevio pid: 61726 00:06:52.023 03:57:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61726 00:06:52.024 03:57:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.024 03:57:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61726' 00:06:52.024 03:57:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61726 00:06:52.024 03:57:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61726 ']' 00:06:52.024 03:57:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.024 03:57:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:52.024 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:06:52.024 03:57:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.024 03:57:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.024 03:57:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.024 03:57:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:52.024 [2024-10-13 03:57:45.070466] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:06:52.024 [2024-10-13 03:57:45.070629] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61726 ] 00:06:52.285 [2024-10-13 03:57:45.225011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.285 [2024-10-13 03:57:45.360419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.285 [2024-10-13 03:57:45.360749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.285 [2024-10-13 03:57:45.360776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.857 03:57:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.857 03:57:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:52.857 03:57:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:53.118 I/O targets: 00:06:53.118 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:53.118 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:53.119 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:53.119 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:53.119 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:53.119 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:53.119 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:53.119 00:06:53.119 00:06:53.119 CUnit - A unit testing framework for C - Version 2.1-3 00:06:53.119 http://cunit.sourceforge.net/ 00:06:53.119 00:06:53.119 00:06:53.119 Suite: bdevio tests on: Nvme3n1 00:06:53.119 Test: blockdev write read block ...passed 00:06:53.119 Test: blockdev write zeroes read block ...passed 00:06:53.119 Test: blockdev write zeroes read no split ...passed 00:06:53.119 Test: blockdev write zeroes read split ...passed 00:06:53.119 Test: blockdev write zeroes read split partial ...passed 00:06:53.119 Test: blockdev reset ...[2024-10-13 03:57:46.129164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:53.119 [2024-10-13 03:57:46.134446] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
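Stepping back to the setup_gpt_conf trace a little earlier (just before the bdev_get_bdevs dump): it pulls the SPDK partition-type GUID out of module/bdev/gpt/gpt.h, labels /dev/nvme0n1 with parted, and then retypes the partitions with sgdisk so the GPT bdev module will claim them. A condensed sketch of those steps, assuming the gpt.h define has the form SPDK_GPT_GUID(0x..., 0x..., ...):

  GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
  IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
  spdk_guid=${spdk_guid//, /-}     # 0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
  spdk_guid=${spdk_guid//0x/}      # 6527994e-2c5a-4eec-9613-8f5944074e8b
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1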
00:06:53.119 passed 00:06:53.119 Test: blockdev write read 8 blocks ...passed 00:06:53.119 Test: blockdev write read size > 128k ...passed 00:06:53.119 Test: blockdev write read invalid size ...passed 00:06:53.119 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.119 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.119 Test: blockdev write read max offset ...passed 00:06:53.119 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.119 Test: blockdev writev readv 8 blocks ...passed 00:06:53.119 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.119 Test: blockdev writev readv block ...passed 00:06:53.119 Test: blockdev writev readv size > 128k ...passed 00:06:53.119 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.119 Test: blockdev comparev and writev ...[2024-10-13 03:57:46.154499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1204000 len:0x1000 00:06:53.119 [2024-10-13 03:57:46.154561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.119 passed 00:06:53.119 Test: blockdev nvme passthru rw ...passed 00:06:53.119 Test: blockdev nvme passthru vendor specific ...[2024-10-13 03:57:46.157399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.119 [2024-10-13 03:57:46.157451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.119 passed 00:06:53.119 Test: blockdev nvme admin passthru ...passed 00:06:53.119 Test: blockdev copy ...passed 00:06:53.119 Suite: bdevio tests on: Nvme2n3 00:06:53.119 Test: blockdev write read block ...passed 00:06:53.119 Test: blockdev write zeroes read block ...passed 00:06:53.119 Test: blockdev write zeroes read no split ...passed 00:06:53.119 Test: blockdev write zeroes read split ...passed 00:06:53.119 Test: blockdev write zeroes read split partial ...passed 00:06:53.119 Test: blockdev reset ...[2024-10-13 03:57:46.214808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:53.119 [2024-10-13 03:57:46.218382] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
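For reading the comparev and passthru traces in these suites: the (xx/yy) pair after the status text is Status Code Type / Status Code from the NVMe completion. The two codes that appear in this run decode as follows (values per the NVMe base specification):

  # (02/85): SCT 0x2 = Media and Data Integrity Errors, SC 0x85 = Compare Failure
  #          (the deliberately mismatching COMPARE is rejected, which is what the comparev test expects)
  # (00/01): SCT 0x0 = Generic Command Status, SC 0x01 = Invalid Command Opcode
  #          (the vendor-specific passthru command is refused by the controller; the test still passes)
  printf '(%02x/%02x) %s\n' 0x2 0x85 'Compare Failure' 0x0 0x01 'Invalid Command Opcode'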
00:06:53.119 passed 00:06:53.119 Test: blockdev write read 8 blocks ...passed 00:06:53.119 Test: blockdev write read size > 128k ...passed 00:06:53.119 Test: blockdev write read invalid size ...passed 00:06:53.119 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.119 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.119 Test: blockdev write read max offset ...passed 00:06:53.119 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.119 Test: blockdev writev readv 8 blocks ...passed 00:06:53.119 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.119 Test: blockdev writev readv block ...passed 00:06:53.119 Test: blockdev writev readv size > 128k ...passed 00:06:53.119 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.119 Test: blockdev comparev and writev ...[2024-10-13 03:57:46.226230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1202000 len:0x1000 00:06:53.119 [2024-10-13 03:57:46.226282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.119 passed 00:06:53.119 Test: blockdev nvme passthru rw ...passed 00:06:53.119 Test: blockdev nvme passthru vendor specific ...[2024-10-13 03:57:46.227158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.119 [2024-10-13 03:57:46.227187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.119 passed 00:06:53.119 Test: blockdev nvme admin passthru ...passed 00:06:53.119 Test: blockdev copy ...passed 00:06:53.119 Suite: bdevio tests on: Nvme2n2 00:06:53.119 Test: blockdev write read block ...passed 00:06:53.119 Test: blockdev write zeroes read block ...passed 00:06:53.119 Test: blockdev write zeroes read no split ...passed 00:06:53.119 Test: blockdev write zeroes read split ...passed 00:06:53.379 Test: blockdev write zeroes read split partial ...passed 00:06:53.379 Test: blockdev reset ...[2024-10-13 03:57:46.293020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:53.379 [2024-10-13 03:57:46.297296] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:53.379 passed 00:06:53.379 Test: blockdev write read 8 blocks ...passed 00:06:53.379 Test: blockdev write read size > 128k ...passed 00:06:53.379 Test: blockdev write read invalid size ...passed 00:06:53.379 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.379 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.379 Test: blockdev write read max offset ...passed 00:06:53.379 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.379 Test: blockdev writev readv 8 blocks ...passed 00:06:53.379 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.379 Test: blockdev writev readv block ...passed 00:06:53.379 Test: blockdev writev readv size > 128k ...passed 00:06:53.379 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.379 Test: blockdev comparev and writev ...[2024-10-13 03:57:46.316750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6638000 len:0x1000 00:06:53.379 [2024-10-13 03:57:46.316809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.379 passed 00:06:53.379 Test: blockdev nvme passthru rw ...passed 00:06:53.379 Test: blockdev nvme passthru vendor specific ...[2024-10-13 03:57:46.319625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.379 [2024-10-13 03:57:46.319664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.379 passed 00:06:53.379 Test: blockdev nvme admin passthru ...passed 00:06:53.379 Test: blockdev copy ...passed 00:06:53.379 Suite: bdevio tests on: Nvme2n1 00:06:53.379 Test: blockdev write read block ...passed 00:06:53.379 Test: blockdev write zeroes read block ...passed 00:06:53.379 Test: blockdev write zeroes read no split ...passed 00:06:53.379 Test: blockdev write zeroes read split ...passed 00:06:53.379 Test: blockdev write zeroes read split partial ...passed 00:06:53.379 Test: blockdev reset ...[2024-10-13 03:57:46.380758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:53.379 [2024-10-13 03:57:46.385576] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:53.379 passed 00:06:53.379 Test: blockdev write read 8 blocks ...passed 00:06:53.379 Test: blockdev write read size > 128k ...passed 00:06:53.379 Test: blockdev write read invalid size ...passed 00:06:53.379 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.379 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.379 Test: blockdev write read max offset ...passed 00:06:53.379 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.379 Test: blockdev writev readv 8 blocks ...passed 00:06:53.379 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.379 Test: blockdev writev readv block ...passed 00:06:53.379 Test: blockdev writev readv size > 128k ...passed 00:06:53.379 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.379 Test: blockdev comparev and writev ...[2024-10-13 03:57:46.404074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6634000 len:0x1000 00:06:53.379 [2024-10-13 03:57:46.404149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.379 passed 00:06:53.379 Test: blockdev nvme passthru rw ...passed 00:06:53.379 Test: blockdev nvme passthru vendor specific ...[2024-10-13 03:57:46.406902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:53.379 [2024-10-13 03:57:46.406955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:53.379 passed 00:06:53.379 Test: blockdev nvme admin passthru ...passed 00:06:53.379 Test: blockdev copy ...passed 00:06:53.379 Suite: bdevio tests on: Nvme1n1p2 00:06:53.379 Test: blockdev write read block ...passed 00:06:53.379 Test: blockdev write zeroes read block ...passed 00:06:53.379 Test: blockdev write zeroes read no split ...passed 00:06:53.379 Test: blockdev write zeroes read split ...passed 00:06:53.379 Test: blockdev write zeroes read split partial ...passed 00:06:53.379 Test: blockdev reset ...[2024-10-13 03:57:46.469570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:53.379 [2024-10-13 03:57:46.473187] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:53.379 passed 00:06:53.379 Test: blockdev write read 8 blocks ...passed 00:06:53.379 Test: blockdev write read size > 128k ...passed 00:06:53.379 Test: blockdev write read invalid size ...passed 00:06:53.379 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.379 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.379 Test: blockdev write read max offset ...passed 00:06:53.379 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.379 Test: blockdev writev readv 8 blocks ...passed 00:06:53.379 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.379 Test: blockdev writev readv block ...passed 00:06:53.379 Test: blockdev writev readv size > 128k ...passed 00:06:53.379 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.379 Test: blockdev comparev and writev ...[2024-10-13 03:57:46.484352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d6630000 len:0x1000 00:06:53.379 [2024-10-13 03:57:46.484410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.379 passed 00:06:53.379 Test: blockdev nvme passthru rw ...passed 00:06:53.379 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.379 Test: blockdev nvme admin passthru ...passed 00:06:53.379 Test: blockdev copy ...passed 00:06:53.379 Suite: bdevio tests on: Nvme1n1p1 00:06:53.379 Test: blockdev write read block ...passed 00:06:53.379 Test: blockdev write zeroes read block ...passed 00:06:53.379 Test: blockdev write zeroes read no split ...passed 00:06:53.379 Test: blockdev write zeroes read split ...passed 00:06:53.379 Test: blockdev write zeroes read split partial ...passed 00:06:53.379 Test: blockdev reset ...[2024-10-13 03:57:46.535999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:53.640 [2024-10-13 03:57:46.540608] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:53.641 passed 00:06:53.641 Test: blockdev write read 8 blocks ...passed 00:06:53.641 Test: blockdev write read size > 128k ...passed 00:06:53.641 Test: blockdev write read invalid size ...passed 00:06:53.641 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.641 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.641 Test: blockdev write read max offset ...passed 00:06:53.641 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.641 Test: blockdev writev readv 8 blocks ...passed 00:06:53.641 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.641 Test: blockdev writev readv block ...passed 00:06:53.641 Test: blockdev writev readv size > 128k ...passed 00:06:53.641 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.641 Test: blockdev comparev and writev ...[2024-10-13 03:57:46.560083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b140e000 len:0x1000 00:06:53.641 [2024-10-13 03:57:46.560140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:53.641 passed 00:06:53.641 Test: blockdev nvme passthru rw ...passed 00:06:53.641 Test: blockdev nvme passthru vendor specific ...passed 00:06:53.641 Test: blockdev nvme admin passthru ...passed 00:06:53.641 Test: blockdev copy ...passed 00:06:53.641 Suite: bdevio tests on: Nvme0n1 00:06:53.641 Test: blockdev write read block ...passed 00:06:53.641 Test: blockdev write zeroes read block ...passed 00:06:53.641 Test: blockdev write zeroes read no split ...passed 00:06:53.641 Test: blockdev write zeroes read split ...passed 00:06:53.641 Test: blockdev write zeroes read split partial ...passed 00:06:53.641 Test: blockdev reset ...[2024-10-13 03:57:46.618268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:53.641 [2024-10-13 03:57:46.622119] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:53.641 passed 00:06:53.641 Test: blockdev write read 8 blocks ...passed 00:06:53.641 Test: blockdev write read size > 128k ...passed 00:06:53.641 Test: blockdev write read invalid size ...passed 00:06:53.641 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:53.641 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:53.641 Test: blockdev write read max offset ...passed 00:06:53.641 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:53.641 Test: blockdev writev readv 8 blocks ...passed 00:06:53.641 Test: blockdev writev readv 30 x 1block ...passed 00:06:53.641 Test: blockdev writev readv block ...passed 00:06:53.641 Test: blockdev writev readv size > 128k ...passed 00:06:53.641 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:53.641 Test: blockdev comparev and writev ...passed 00:06:53.641 Test: blockdev nvme passthru rw ...[2024-10-13 03:57:46.635056] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:53.641 separate metadata which is not supported yet. 
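Once the last bdevio suite finishes, the bounds test below tears down its SPDK target with the killprocess helper (pid 61726 in this run). A condensed sketch of that teardown pattern, assuming $pid was captured when the app was launched and simplifying the helper's sudo handling:

  pid=61726                          # value from this run; normally saved when the app is started
  [ -n "$pid" ] || exit 1            # refuse to proceed with an empty pid
  kill -0 "$pid"                     # check the process still exists
  ps --no-headers -o comm= "$pid"    # confirm it is the SPDK reactor process
  kill "$pid"                        # ask it to shut down
  wait "$pid"                        # reap the child and collect its exit status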
00:06:53.641 passed 00:06:53.641 Test: blockdev nvme passthru vendor specific ...[2024-10-13 03:57:46.636045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:53.641 [2024-10-13 03:57:46.636098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:53.641 passed 00:06:53.641 Test: blockdev nvme admin passthru ...passed 00:06:53.641 Test: blockdev copy ...passed 00:06:53.641 00:06:53.641 Run Summary: Type Total Ran Passed Failed Inactive 00:06:53.641 suites 7 7 n/a 0 0 00:06:53.641 tests 161 161 161 0 0 00:06:53.641 asserts 1025 1025 1025 0 n/a 00:06:53.641 00:06:53.641 Elapsed time = 1.453 seconds 00:06:53.641 0 00:06:53.641 03:57:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61726 00:06:53.641 03:57:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61726 ']' 00:06:53.641 03:57:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61726 00:06:53.641 03:57:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:53.641 03:57:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.641 03:57:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61726 00:06:53.641 03:57:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.641 03:57:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.641 killing process with pid 61726 00:06:53.641 03:57:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61726' 00:06:53.641 03:57:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61726 00:06:53.641 03:57:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61726 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:54.583 00:06:54.583 real 0m2.427s 00:06:54.583 user 0m6.069s 00:06:54.583 sys 0m0.349s 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:54.583 ************************************ 00:06:54.583 END TEST bdev_bounds 00:06:54.583 ************************************ 00:06:54.583 03:57:47 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:54.583 03:57:47 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:54.583 03:57:47 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.583 03:57:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.583 ************************************ 00:06:54.583 START TEST bdev_nbd 00:06:54.583 ************************************ 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:54.583 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61785 00:06:54.584 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:54.584 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61785 /var/tmp/spdk-nbd.sock 00:06:54.584 03:57:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61785 ']' 00:06:54.584 03:57:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.584 03:57:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.584 03:57:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.584 03:57:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.584 03:57:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:54.584 03:57:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:54.584 [2024-10-13 03:57:47.567781] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
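The nbd test drives a dedicated SPDK app over its own RPC socket: bdev_svc is launched with the JSON bdev config on /var/tmp/spdk-nbd.sock, the harness waits for it to listen, and every nbd_* command is then issued through rpc.py -s against that socket. A minimal sketch of the same pattern, assuming an SPDK checkout in $SPDK_DIR and using rpc_get_methods as a simplified readiness probe in place of the waitforlisten helper seen in the trace:

  sock=/var/tmp/spdk-nbd.sock
  "$SPDK_DIR"/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 \
      --json "$SPDK_DIR"/test/bdev/bdev.json &
  svc_pid=$!
  # Poll the RPC socket until the app answers before issuing nbd_* calls.
  until "$SPDK_DIR"/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  "$SPDK_DIR"/scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0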
00:06:54.584 [2024-10-13 03:57:47.567928] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.584 [2024-10-13 03:57:47.725280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.845 [2024-10-13 03:57:47.849327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.417 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:55.678 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:55.678 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:55.678 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:55.678 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:55.678 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:55.678 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:55.678 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:55.678 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:55.678 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:55.679 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:55.679 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:55.679 03:57:48 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.679 1+0 records in 00:06:55.679 1+0 records out 00:06:55.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000922153 s, 4.4 MB/s 00:06:55.679 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.679 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:55.679 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.679 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:55.679 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:55.679 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.679 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.679 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.940 1+0 records in 00:06:55.940 1+0 records out 00:06:55.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660573 s, 6.2 MB/s 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.940 03:57:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.201 1+0 records in 00:06:56.201 1+0 records out 00:06:56.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106416 s, 3.8 MB/s 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.201 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.462 1+0 records in 00:06:56.462 1+0 records out 00:06:56.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000803833 s, 5.1 MB/s 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.462 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.722 1+0 records in 00:06:56.722 1+0 records out 00:06:56.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742515 s, 5.5 MB/s 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.722 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.983 1+0 records in 00:06:56.983 1+0 records out 00:06:56.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121167 s, 3.4 MB/s 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.983 03:57:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.243 1+0 records in 00:06:57.243 1+0 records out 00:06:57.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479562 s, 8.5 MB/s 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.243 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.502 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd0", 00:06:57.502 "bdev_name": "Nvme0n1" 00:06:57.502 }, 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd1", 00:06:57.502 "bdev_name": "Nvme1n1p1" 00:06:57.502 }, 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd2", 00:06:57.502 "bdev_name": "Nvme1n1p2" 00:06:57.502 }, 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd3", 00:06:57.502 "bdev_name": "Nvme2n1" 00:06:57.502 }, 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd4", 00:06:57.502 "bdev_name": "Nvme2n2" 00:06:57.502 }, 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd5", 00:06:57.502 "bdev_name": "Nvme2n3" 00:06:57.502 }, 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd6", 00:06:57.502 "bdev_name": "Nvme3n1" 00:06:57.502 } 00:06:57.502 ]' 00:06:57.502 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:57.502 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:57.502 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd0", 00:06:57.502 "bdev_name": "Nvme0n1" 00:06:57.502 }, 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd1", 00:06:57.502 "bdev_name": "Nvme1n1p1" 00:06:57.502 }, 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd2", 00:06:57.502 "bdev_name": "Nvme1n1p2" 00:06:57.502 }, 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd3", 00:06:57.502 "bdev_name": "Nvme2n1" 00:06:57.502 }, 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd4", 00:06:57.502 "bdev_name": "Nvme2n2" 00:06:57.502 }, 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd5", 00:06:57.502 "bdev_name": "Nvme2n3" 00:06:57.502 }, 00:06:57.502 { 00:06:57.502 "nbd_device": "/dev/nbd6", 00:06:57.502 "bdev_name": "Nvme3n1" 00:06:57.502 } 00:06:57.502 ]' 00:06:57.503 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:57.503 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.503 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:57.503 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.503 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:57.503 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.503 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.762 03:57:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:58.022 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:58.022 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:58.022 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:58.022 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.022 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.022 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:58.022 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.022 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.022 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.022 03:57:51 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:58.282 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:58.282 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:58.282 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:58.282 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.282 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.282 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:58.282 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.282 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.282 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.282 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:58.542 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:58.542 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:58.543 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:58.543 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.543 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.543 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:58.543 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.543 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.543 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.543 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
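The surrounding trace runs one start/verify/stop cycle for each of the seven bdevs (nbd0 through nbd6). Condensed to a single device, with the retry loops around the grep checks and the stat/cleanup of the scratch file elided, the cycle looks like this (device and bdev names taken from this run; the scratch path is simplified):

  rpc="$SPDK_DIR"/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0             # export the bdev as an NBD device
  grep -q -w nbd0 /proc/partitions                               # waitfornbd: device appears in the partition table
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # read one 4 KiB block through the kernel NBD path
  "$rpc" -s "$sock" nbd_get_disks                                # JSON mapping of nbd_device to bdev_name
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0                      # tear the export back down
  ! grep -q -w nbd0 /proc/partitions                             # waitfornbd_exit: device is gone again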
00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.803 03:57:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:59.063 03:57:52 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.063 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:59.322 /dev/nbd0 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.322 1+0 records in 00:06:59.322 1+0 records out 00:06:59.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272722 s, 15.0 MB/s 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.322 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:59.581 /dev/nbd1 00:06:59.581 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.581 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.581 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:59.581 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:59.581 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.581 03:57:52 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.581 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:59.581 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:59.581 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.581 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.581 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.581 1+0 records in 00:06:59.581 1+0 records out 00:06:59.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407067 s, 10.1 MB/s 00:06:59.582 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.582 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:59.582 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.582 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.582 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:59.582 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.582 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.582 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:59.840 /dev/nbd10 00:06:59.840 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:59.840 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:59.840 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:59.840 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:59.840 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.840 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.840 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:59.840 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:59.841 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.841 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.841 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.841 1+0 records in 00:06:59.841 1+0 records out 00:06:59.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048786 s, 8.4 MB/s 00:06:59.841 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.841 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:59.841 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.841 03:57:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.841 03:57:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:59.841 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.841 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.841 03:57:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:00.100 /dev/nbd11 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.100 1+0 records in 00:07:00.100 1+0 records out 00:07:00.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455855 s, 9.0 MB/s 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.100 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:00.100 /dev/nbd12 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.359 1+0 records in 00:07:00.359 1+0 records out 00:07:00.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046036 s, 8.9 MB/s 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:00.359 /dev/nbd13 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.359 1+0 records in 00:07:00.359 1+0 records out 00:07:00.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393614 s, 10.4 MB/s 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.359 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:00.619 /dev/nbd14 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.619 1+0 records in 00:07:00.619 1+0 records out 00:07:00.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114011 s, 3.6 MB/s 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.619 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd0", 00:07:00.878 "bdev_name": "Nvme0n1" 00:07:00.878 }, 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd1", 00:07:00.878 "bdev_name": "Nvme1n1p1" 00:07:00.878 }, 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd10", 00:07:00.878 "bdev_name": "Nvme1n1p2" 00:07:00.878 }, 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd11", 00:07:00.878 "bdev_name": "Nvme2n1" 00:07:00.878 }, 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd12", 00:07:00.878 "bdev_name": "Nvme2n2" 00:07:00.878 }, 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd13", 00:07:00.878 "bdev_name": "Nvme2n3" 
00:07:00.878 }, 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd14", 00:07:00.878 "bdev_name": "Nvme3n1" 00:07:00.878 } 00:07:00.878 ]' 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd0", 00:07:00.878 "bdev_name": "Nvme0n1" 00:07:00.878 }, 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd1", 00:07:00.878 "bdev_name": "Nvme1n1p1" 00:07:00.878 }, 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd10", 00:07:00.878 "bdev_name": "Nvme1n1p2" 00:07:00.878 }, 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd11", 00:07:00.878 "bdev_name": "Nvme2n1" 00:07:00.878 }, 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd12", 00:07:00.878 "bdev_name": "Nvme2n2" 00:07:00.878 }, 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd13", 00:07:00.878 "bdev_name": "Nvme2n3" 00:07:00.878 }, 00:07:00.878 { 00:07:00.878 "nbd_device": "/dev/nbd14", 00:07:00.878 "bdev_name": "Nvme3n1" 00:07:00.878 } 00:07:00.878 ]' 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.878 /dev/nbd1 00:07:00.878 /dev/nbd10 00:07:00.878 /dev/nbd11 00:07:00.878 /dev/nbd12 00:07:00.878 /dev/nbd13 00:07:00.878 /dev/nbd14' 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.878 /dev/nbd1 00:07:00.878 /dev/nbd10 00:07:00.878 /dev/nbd11 00:07:00.878 /dev/nbd12 00:07:00.878 /dev/nbd13 00:07:00.878 /dev/nbd14' 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:00.878 256+0 records in 00:07:00.878 256+0 records out 00:07:00.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0071478 s, 147 MB/s 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.878 03:57:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:01.137 256+0 records in 00:07:01.137 256+0 records out 00:07:01.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.102226 s, 10.3 MB/s 00:07:01.137 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.137 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:01.137 256+0 records in 00:07:01.137 256+0 records out 00:07:01.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171521 s, 6.1 MB/s 00:07:01.137 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.137 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:01.398 256+0 records in 00:07:01.398 256+0 records out 00:07:01.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10547 s, 9.9 MB/s 00:07:01.398 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.398 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:01.660 256+0 records in 00:07:01.660 256+0 records out 00:07:01.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.210219 s, 5.0 MB/s 00:07:01.660 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.660 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:01.660 256+0 records in 00:07:01.660 256+0 records out 00:07:01.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136576 s, 7.7 MB/s 00:07:01.660 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.660 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:01.921 256+0 records in 00:07:01.922 256+0 records out 00:07:01.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108558 s, 9.7 MB/s 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:01.922 256+0 records in 00:07:01.922 256+0 records out 00:07:01.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112832 s, 9.3 MB/s 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.922 03:57:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.922 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.181 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.181 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.181 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.181 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.181 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.181 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.181 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.181 03:57:55 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:02.181 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.181 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.442 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.442 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.442 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.442 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.442 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.442 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.442 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.442 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.443 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.443 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:02.704 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:02.704 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:02.704 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:02.704 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.704 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.704 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:02.704 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.704 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.704 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.704 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:02.966 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:02.966 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:02.966 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:02.966 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.966 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.966 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:02.966 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.966 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.966 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.966 03:57:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:03.231 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:03.231 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:03.231 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:03.231 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.232 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:03.492 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:03.492 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:03.492 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:03.492 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.492 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.492 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:03.492 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.492 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.492 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.492 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.492 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:03.754 03:57:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:04.014 malloc_lvol_verify 00:07:04.014 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:04.274 f6816e7b-fa6c-481d-a6e4-e81974bf6f09 00:07:04.274 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:04.535 8234a962-33f9-4b3a-9320-5064ade2de10 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:04.535 /dev/nbd0 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:04.535 mke2fs 1.47.0 (5-Feb-2023) 00:07:04.535 Discarding device blocks: 0/4096 done 00:07:04.535 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:04.535 00:07:04.535 Allocating group tables: 0/1 done 00:07:04.535 Writing inode tables: 0/1 done 00:07:04.535 Creating journal (1024 blocks): done 00:07:04.535 Writing superblocks and filesystem accounting information: 0/1 done 00:07:04.535 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:04.535 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61785 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61785 ']' 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61785 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61785 00:07:04.794 killing process with pid 61785 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61785' 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61785 00:07:04.794 03:57:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61785 00:07:05.733 03:57:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:05.733 00:07:05.733 real 0m11.177s 00:07:05.733 user 0m15.629s 00:07:05.733 sys 0m3.695s 00:07:05.733 03:57:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.733 ************************************ 00:07:05.733 END TEST bdev_nbd 00:07:05.733 ************************************ 00:07:05.733 03:57:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:05.733 03:57:58 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:05.733 03:57:58 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:05.733 03:57:58 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:05.733 skipping fio tests on NVMe due to multi-ns failures. 00:07:05.733 03:57:58 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:05.733 03:57:58 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:05.733 03:57:58 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:05.733 03:57:58 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:05.733 03:57:58 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.733 03:57:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.733 ************************************ 00:07:05.733 START TEST bdev_verify 00:07:05.733 ************************************ 00:07:05.733 03:57:58 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:05.733 [2024-10-13 03:57:58.770422] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:07:05.733 [2024-10-13 03:57:58.770531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62198 ] 00:07:05.993 [2024-10-13 03:57:58.919868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.993 [2024-10-13 03:57:59.014185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.993 [2024-10-13 03:57:59.014262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.562 Running I/O for 5 seconds... 
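The bdev_verify stage that starts here is a plain bdevperf run against the bdevs declared in test/bdev/bdev.json (the GPT partitions carved out of Nvme1n1 plus the raw NVMe namespaces). Outside of the run_test wrapper it can be reproduced with the same flags; the paths below follow the vagrant layout used throughout this log, and the trailing empty env_ctx argument from the trace is dropped.

    cd /home/vagrant/spdk_repo/spdk
    # -q 128    : 128 outstanding I/Os per job
    # -o 4096   : 4 KiB I/O size
    # -w verify : write a pattern, read it back and compare
    # -t 5      : run for 5 seconds
    # -m 0x3    : cores 0 and 1, matching the two reactors started above
    # -C is carried over verbatim from the traced command line
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

Each bdev appears twice in the result table that follows, once per core mask (0x1 and 0x2), because both reactors drive I/O to every device.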
00:07:08.893 21248.00 IOPS, 83.00 MiB/s [2024-10-13T03:58:02.993Z] 22784.00 IOPS, 89.00 MiB/s [2024-10-13T03:58:03.927Z] 23850.67 IOPS, 93.17 MiB/s [2024-10-13T03:58:04.861Z] 24048.00 IOPS, 93.94 MiB/s [2024-10-13T03:58:04.861Z] 23769.60 IOPS, 92.85 MiB/s 00:07:11.701 Latency(us) 00:07:11.701 [2024-10-13T03:58:04.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.701 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x0 length 0xbd0bd 00:07:11.701 Nvme0n1 : 5.06 1668.57 6.52 0.00 0.00 76472.25 14619.57 84289.38 00:07:11.701 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:11.701 Nvme0n1 : 5.05 1699.13 6.64 0.00 0.00 75117.27 13308.85 82272.89 00:07:11.701 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x0 length 0x4ff80 00:07:11.701 Nvme1n1p1 : 5.06 1668.10 6.52 0.00 0.00 76299.50 15627.82 73803.62 00:07:11.701 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:11.701 Nvme1n1p1 : 5.05 1697.98 6.63 0.00 0.00 75026.88 15022.87 73400.32 00:07:11.701 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x0 length 0x4ff7f 00:07:11.701 Nvme1n1p2 : 5.07 1667.59 6.51 0.00 0.00 76172.96 17039.36 64124.46 00:07:11.701 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:11.701 Nvme1n1p2 : 5.05 1697.50 6.63 0.00 0.00 74888.66 16232.76 66544.25 00:07:11.701 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x0 length 0x80000 00:07:11.701 Nvme2n1 : 5.07 1666.49 6.51 0.00 0.00 76049.28 17745.13 60494.77 00:07:11.701 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x80000 length 0x80000 00:07:11.701 Nvme2n1 : 5.05 1697.04 6.63 0.00 0.00 74767.93 17341.83 61704.66 00:07:11.701 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x0 length 0x80000 00:07:11.701 Nvme2n2 : 5.07 1665.29 6.51 0.00 0.00 75917.28 17039.36 63721.16 00:07:11.701 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x80000 length 0x80000 00:07:11.701 Nvme2n2 : 5.07 1704.30 6.66 0.00 0.00 74296.01 2936.52 60494.77 00:07:11.701 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x0 length 0x80000 00:07:11.701 Nvme2n3 : 5.09 1683.57 6.58 0.00 0.00 75050.85 7360.20 66947.54 00:07:11.701 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x80000 length 0x80000 00:07:11.701 Nvme2n3 : 5.08 1712.41 6.69 0.00 0.00 73858.18 8771.74 63721.16 00:07:11.701 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x0 length 0x20000 00:07:11.701 Nvme3n1 : 5.10 1683.13 6.57 0.00 0.00 74931.68 7713.08 68157.44 00:07:11.701 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.701 Verification LBA range: start 0x20000 length 0x20000 00:07:11.701 Nvme3n1 : 
5.08 1711.96 6.69 0.00 0.00 73714.99 8922.98 66947.54 00:07:11.701 [2024-10-13T03:58:04.861Z] =================================================================================================================== 00:07:11.701 [2024-10-13T03:58:04.861Z] Total : 23623.04 92.28 0.00 0.00 75174.32 2936.52 84289.38 00:07:13.073 00:07:13.073 real 0m7.203s 00:07:13.073 user 0m13.556s 00:07:13.073 sys 0m0.190s 00:07:13.073 03:58:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.073 ************************************ 00:07:13.073 END TEST bdev_verify 00:07:13.073 ************************************ 00:07:13.073 03:58:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:13.073 03:58:05 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.073 03:58:05 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:13.073 03:58:05 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.073 03:58:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.073 ************************************ 00:07:13.073 START TEST bdev_verify_big_io 00:07:13.073 ************************************ 00:07:13.073 03:58:05 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.073 [2024-10-13 03:58:06.017672] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:07:13.073 [2024-10-13 03:58:06.017805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62296 ] 00:07:13.073 [2024-10-13 03:58:06.168504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.331 [2024-10-13 03:58:06.262962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.331 [2024-10-13 03:58:06.263141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.897 Running I/O for 5 seconds... 
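In these tables the IOPS and MiB/s columns are tied together by the I/O size: the verify run above used -o 4096, so its Total of 23623.04 IOPS works out to 23623.04 x 4096 / 2^20, which is the 92.28 MiB/s reported next to it. The big-I/O pass that has just started uses -o 65536, so the same IOPS carries sixteen times the bandwidth. A quick shell check of both conversions:

    # Verify-table total: 4096-byte I/Os
    awk 'BEGIN { printf "%.2f MiB/s\n", 23623.04 * 4096 / (1024 * 1024) }'    # -> 92.28 MiB/s
    # Big-I/O total further down: 65536-byte I/Os
    awk 'BEGIN { printf "%.2f MiB/s\n", 1736.58 * 65536 / (1024 * 1024) }'    # -> 108.54 MiB/s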
00:07:19.112 1727.00 IOPS, 107.94 MiB/s [2024-10-13T03:58:13.206Z] 2746.00 IOPS, 171.62 MiB/s [2024-10-13T03:58:13.206Z] 3412.33 IOPS, 213.27 MiB/s 00:07:20.046 Latency(us) 00:07:20.046 [2024-10-13T03:58:13.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.046 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x0 length 0xbd0b 00:07:20.046 Nvme0n1 : 5.99 108.68 6.79 0.00 0.00 1103031.09 12250.19 1348630.06 00:07:20.046 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:20.046 Nvme0n1 : 5.78 114.63 7.16 0.00 0.00 1057410.26 14720.39 1297007.85 00:07:20.046 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x0 length 0x4ff8 00:07:20.046 Nvme1n1p1 : 5.99 110.12 6.88 0.00 0.00 1058685.96 57671.68 1142141.24 00:07:20.046 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:20.046 Nvme1n1p1 : 5.88 118.91 7.43 0.00 0.00 996528.86 62914.56 1090519.04 00:07:20.046 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x0 length 0x4ff7 00:07:20.046 Nvme1n1p2 : 6.08 109.28 6.83 0.00 0.00 1043147.87 103244.41 1587382.74 00:07:20.046 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:20.046 Nvme1n1p2 : 6.03 123.47 7.72 0.00 0.00 929354.48 103244.41 877577.45 00:07:20.046 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x0 length 0x8000 00:07:20.046 Nvme2n1 : 6.14 112.50 7.03 0.00 0.00 981779.91 80659.69 1613193.85 00:07:20.046 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x8000 length 0x8000 00:07:20.046 Nvme2n1 : 5.99 122.97 7.69 0.00 0.00 899540.23 102034.51 987274.63 00:07:20.046 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x0 length 0x8000 00:07:20.046 Nvme2n2 : 6.14 116.05 7.25 0.00 0.00 926745.64 63721.16 1626099.40 00:07:20.046 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x8000 length 0x8000 00:07:20.046 Nvme2n2 : 6.07 131.39 8.21 0.00 0.00 823731.55 37708.41 1051802.39 00:07:20.046 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x0 length 0x8000 00:07:20.046 Nvme2n3 : 6.21 126.47 7.90 0.00 0.00 824299.02 17946.78 1664816.05 00:07:20.046 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x8000 length 0x8000 00:07:20.046 Nvme2n3 : 6.11 136.08 8.51 0.00 0.00 773057.15 29844.09 1071160.71 00:07:20.046 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x0 length 0x2000 00:07:20.046 Nvme3n1 : 6.23 150.92 9.43 0.00 0.00 672142.62 3982.57 1264743.98 00:07:20.046 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.046 Verification LBA range: start 0x2000 length 0x2000 00:07:20.046 Nvme3n1 : 6.18 155.11 9.69 0.00 0.00 662617.77 1966.08 1084066.26 00:07:20.046 
[2024-10-13T03:58:13.206Z] =================================================================================================================== 00:07:20.046 [2024-10-13T03:58:13.206Z] Total : 1736.58 108.54 0.00 0.00 893624.68 1966.08 1664816.05 00:07:21.945 00:07:21.945 real 0m8.765s 00:07:21.945 user 0m16.642s 00:07:21.945 sys 0m0.207s 00:07:21.945 03:58:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.945 03:58:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:21.945 ************************************ 00:07:21.945 END TEST bdev_verify_big_io 00:07:21.945 ************************************ 00:07:21.945 03:58:14 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.945 03:58:14 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:21.945 03:58:14 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.945 03:58:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.945 ************************************ 00:07:21.945 START TEST bdev_write_zeroes 00:07:21.945 ************************************ 00:07:21.945 03:58:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.945 [2024-10-13 03:58:14.837081] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:07:21.945 [2024-10-13 03:58:14.837200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62411 ] 00:07:21.945 [2024-10-13 03:58:14.987880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.945 [2024-10-13 03:58:15.085102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.513 Running I/O for 1 seconds... 
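Every stage in this log is launched through the same run_test wrapper from common/autotest_common.sh: it prints the asterisk banners, runs the command under bash's time builtin (hence the real/user/sys lines after each stage) and toggles xtrace around it, which is what the @1101/@1107/@1126 steps in the trace correspond to. A simplified sketch of that wrapper, with the banner width, the usage message and the exact xtrace handling treated as assumptions:

    run_test() {
        # Simplified reconstruction; the real helper also tracks per-test
        # timing and xtrace state (@1101/@1107/@1126 in the trace).
        local test_name=$1
        if [ "$#" -le 1 ]; then
            # @1101: a test name and a command are both required
            echo "run_test: usage: run_test NAME COMMAND [ARGS...]" >&2
            return 1
        fi
        shift

        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"

        time "$@"
        local rc=$?

        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }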
00:07:23.897 63104.00 IOPS, 246.50 MiB/s 00:07:23.897 Latency(us) 00:07:23.897 [2024-10-13T03:58:17.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.897 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.897 Nvme0n1 : 1.03 8961.34 35.01 0.00 0.00 14250.55 7309.78 31053.98 00:07:23.897 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.897 Nvme1n1p1 : 1.03 8950.28 34.96 0.00 0.00 14250.31 11141.12 31457.28 00:07:23.897 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.897 Nvme1n1p2 : 1.03 8939.15 34.92 0.00 0.00 14216.96 11141.12 30247.38 00:07:23.897 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.897 Nvme2n1 : 1.03 8929.10 34.88 0.00 0.00 14121.10 9729.58 28029.24 00:07:23.897 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.897 Nvme2n2 : 1.03 8919.09 34.84 0.00 0.00 14114.54 9679.16 28634.19 00:07:23.897 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.897 Nvme2n3 : 1.03 8909.11 34.80 0.00 0.00 14091.16 7561.85 29844.09 00:07:23.897 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:23.897 Nvme3n1 : 1.04 8837.30 34.52 0.00 0.00 14181.74 10586.58 31457.28 00:07:23.897 [2024-10-13T03:58:17.057Z] =================================================================================================================== 00:07:23.897 [2024-10-13T03:58:17.057Z] Total : 62445.36 243.93 0.00 0.00 14175.19 7309.78 31457.28 00:07:24.468 00:07:24.468 real 0m2.694s 00:07:24.468 user 0m2.391s 00:07:24.468 sys 0m0.188s 00:07:24.468 03:58:17 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.468 ************************************ 00:07:24.468 END TEST bdev_write_zeroes 00:07:24.468 ************************************ 00:07:24.468 03:58:17 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:24.468 03:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.468 03:58:17 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:24.468 03:58:17 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.468 03:58:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.468 ************************************ 00:07:24.468 START TEST bdev_json_nonenclosed 00:07:24.468 ************************************ 00:07:24.468 03:58:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.468 [2024-10-13 03:58:17.603598] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:07:24.468 [2024-10-13 03:58:17.603770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62465 ] 00:07:24.729 [2024-10-13 03:58:17.760389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.990 [2024-10-13 03:58:17.892147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.990 [2024-10-13 03:58:17.892261] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:24.990 [2024-10-13 03:58:17.892279] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:24.990 [2024-10-13 03:58:17.892290] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.990 00:07:24.990 real 0m0.553s 00:07:24.990 user 0m0.335s 00:07:24.990 sys 0m0.112s 00:07:24.990 03:58:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.990 03:58:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:24.990 ************************************ 00:07:24.990 END TEST bdev_json_nonenclosed 00:07:24.990 ************************************ 00:07:24.990 03:58:18 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:24.990 03:58:18 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:24.990 03:58:18 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.990 03:58:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.250 ************************************ 00:07:25.250 START TEST bdev_json_nonarray 00:07:25.250 ************************************ 00:07:25.250 03:58:18 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.250 [2024-10-13 03:58:18.225656] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:07:25.250 [2024-10-13 03:58:18.225801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62485 ] 00:07:25.250 [2024-10-13 03:58:18.381054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.511 [2024-10-13 03:58:18.510892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.511 [2024-10-13 03:58:18.511000] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
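The two JSON stages are negative tests: each hands bdevperf a deliberately malformed --json config and checks that json_config_prepare_ctx rejects it cleanly, which is what the *ERROR* and "spdk_app_stop'd on non-zero" lines show. The repository's nonenclosed.json and nonarray.json are not reproduced in this log; the following hypothetical stand-ins illustrate the two failure modes being exercised, a top-level value that is not an object and a "subsystems" key whose value is not an array, and the real files may differ in detail.

    nonenclosed.json (hypothetical):
    [
      { "subsystem": "bdev", "config": [] }
    ]

    nonarray.json (hypothetical):
    {
      "subsystems": { "subsystem": "bdev", "config": [] }
    }

Feeding either file to bdevperf --json reproduces the corresponding rejection message before any I/O is started.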
00:07:25.511 [2024-10-13 03:58:18.511021] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:25.511 [2024-10-13 03:58:18.511031] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.772 00:07:25.772 real 0m0.552s 00:07:25.772 user 0m0.337s 00:07:25.772 sys 0m0.108s 00:07:25.772 03:58:18 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.772 03:58:18 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:25.773 ************************************ 00:07:25.773 END TEST bdev_json_nonarray 00:07:25.773 ************************************ 00:07:25.773 03:58:18 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:25.773 03:58:18 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:25.773 03:58:18 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:25.773 03:58:18 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.773 03:58:18 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.773 03:58:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.773 ************************************ 00:07:25.773 START TEST bdev_gpt_uuid 00:07:25.773 ************************************ 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62516 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62516 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62516 ']' 00:07:25.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:25.773 03:58:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:25.773 [2024-10-13 03:58:18.864436] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:07:25.773 [2024-10-13 03:58:18.864602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62516 ] 00:07:26.064 [2024-10-13 03:58:19.020937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.065 [2024-10-13 03:58:19.148470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.007 03:58:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.007 03:58:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:27.007 03:58:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:27.007 03:58:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.007 03:58:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:27.007 Some configs were skipped because the RPC state that can call them passed over. 00:07:27.007 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.007 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:27.007 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.007 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:27.268 { 00:07:27.268 "name": "Nvme1n1p1", 00:07:27.268 "aliases": [ 00:07:27.268 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:27.268 ], 00:07:27.268 "product_name": "GPT Disk", 00:07:27.268 "block_size": 4096, 00:07:27.268 "num_blocks": 655104, 00:07:27.268 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:27.268 "assigned_rate_limits": { 00:07:27.268 "rw_ios_per_sec": 0, 00:07:27.268 "rw_mbytes_per_sec": 0, 00:07:27.268 "r_mbytes_per_sec": 0, 00:07:27.268 "w_mbytes_per_sec": 0 00:07:27.268 }, 00:07:27.268 "claimed": false, 00:07:27.268 "zoned": false, 00:07:27.268 "supported_io_types": { 00:07:27.268 "read": true, 00:07:27.268 "write": true, 00:07:27.268 "unmap": true, 00:07:27.268 "flush": true, 00:07:27.268 "reset": true, 00:07:27.268 "nvme_admin": false, 00:07:27.268 "nvme_io": false, 00:07:27.268 "nvme_io_md": false, 00:07:27.268 "write_zeroes": true, 00:07:27.268 "zcopy": false, 00:07:27.268 "get_zone_info": false, 00:07:27.268 "zone_management": false, 00:07:27.268 "zone_append": false, 00:07:27.268 "compare": true, 00:07:27.268 "compare_and_write": false, 00:07:27.268 "abort": true, 00:07:27.268 "seek_hole": false, 00:07:27.268 "seek_data": false, 00:07:27.268 "copy": true, 00:07:27.268 "nvme_iov_md": false 00:07:27.268 }, 00:07:27.268 "driver_specific": { 
00:07:27.268 "gpt": { 00:07:27.268 "base_bdev": "Nvme1n1", 00:07:27.268 "offset_blocks": 256, 00:07:27.268 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:27.268 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:27.268 "partition_name": "SPDK_TEST_first" 00:07:27.268 } 00:07:27.268 } 00:07:27.268 } 00:07:27.268 ]' 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:27.268 { 00:07:27.268 "name": "Nvme1n1p2", 00:07:27.268 "aliases": [ 00:07:27.268 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:27.268 ], 00:07:27.268 "product_name": "GPT Disk", 00:07:27.268 "block_size": 4096, 00:07:27.268 "num_blocks": 655103, 00:07:27.268 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:27.268 "assigned_rate_limits": { 00:07:27.268 "rw_ios_per_sec": 0, 00:07:27.268 "rw_mbytes_per_sec": 0, 00:07:27.268 "r_mbytes_per_sec": 0, 00:07:27.268 "w_mbytes_per_sec": 0 00:07:27.268 }, 00:07:27.268 "claimed": false, 00:07:27.268 "zoned": false, 00:07:27.268 "supported_io_types": { 00:07:27.268 "read": true, 00:07:27.268 "write": true, 00:07:27.268 "unmap": true, 00:07:27.268 "flush": true, 00:07:27.268 "reset": true, 00:07:27.268 "nvme_admin": false, 00:07:27.268 "nvme_io": false, 00:07:27.268 "nvme_io_md": false, 00:07:27.268 "write_zeroes": true, 00:07:27.268 "zcopy": false, 00:07:27.268 "get_zone_info": false, 00:07:27.268 "zone_management": false, 00:07:27.268 "zone_append": false, 00:07:27.268 "compare": true, 00:07:27.268 "compare_and_write": false, 00:07:27.268 "abort": true, 00:07:27.268 "seek_hole": false, 00:07:27.268 "seek_data": false, 00:07:27.268 "copy": true, 00:07:27.268 "nvme_iov_md": false 00:07:27.268 }, 00:07:27.268 "driver_specific": { 00:07:27.268 "gpt": { 00:07:27.268 "base_bdev": "Nvme1n1", 00:07:27.268 "offset_blocks": 655360, 00:07:27.268 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:27.268 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:27.268 "partition_name": "SPDK_TEST_second" 00:07:27.268 } 00:07:27.268 } 00:07:27.268 } 00:07:27.268 ]' 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62516 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62516 ']' 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62516 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62516 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.268 killing process with pid 62516 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62516' 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62516 00:07:27.268 03:58:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62516 00:07:29.176 00:07:29.176 real 0m3.146s 00:07:29.176 user 0m3.217s 00:07:29.176 sys 0m0.448s 00:07:29.176 03:58:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.176 03:58:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:29.176 ************************************ 00:07:29.176 END TEST bdev_gpt_uuid 00:07:29.176 ************************************ 00:07:29.176 03:58:21 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:29.176 03:58:21 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:29.176 03:58:21 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:29.176 03:58:21 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:29.176 03:58:21 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:29.176 03:58:21 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:29.176 03:58:21 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:29.176 03:58:21 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:29.176 03:58:21 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:29.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:29.436 Waiting for block devices as requested 00:07:29.436 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:29.436 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:29.436 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:29.695 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.960 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:34.960 03:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:34.960 03:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:34.960 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:34.960 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:34.960 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:34.960 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:34.960 03:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:34.960 00:07:34.960 real 0m57.051s 00:07:34.960 user 1m12.703s 00:07:34.960 sys 0m8.161s 00:07:34.960 03:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.960 03:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.960 ************************************ 00:07:34.960 END TEST blockdev_nvme_gpt 00:07:34.960 ************************************ 00:07:34.960 03:58:27 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:34.960 03:58:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.960 03:58:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.960 03:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:34.960 ************************************ 00:07:34.960 START TEST nvme 00:07:34.960 ************************************ 00:07:34.960 03:58:27 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:34.960 * Looking for test storage... 00:07:34.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:34.960 03:58:28 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:34.960 03:58:28 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:34.960 03:58:28 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:34.960 03:58:28 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:34.960 03:58:28 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.960 03:58:28 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.960 03:58:28 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.960 03:58:28 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.960 03:58:28 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.960 03:58:28 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.960 03:58:28 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.960 03:58:28 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.960 03:58:28 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.960 03:58:28 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.960 03:58:28 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.960 03:58:28 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:34.960 03:58:28 nvme -- scripts/common.sh@345 -- # : 1 00:07:34.960 03:58:28 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.960 03:58:28 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.960 03:58:28 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:34.960 03:58:28 nvme -- scripts/common.sh@353 -- # local d=1 00:07:34.960 03:58:28 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.960 03:58:28 nvme -- scripts/common.sh@355 -- # echo 1 00:07:34.960 03:58:28 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.960 03:58:28 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:34.960 03:58:28 nvme -- scripts/common.sh@353 -- # local d=2 00:07:34.960 03:58:28 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.960 03:58:28 nvme -- scripts/common.sh@355 -- # echo 2 00:07:34.960 03:58:28 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.960 03:58:28 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.960 03:58:28 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.960 03:58:28 nvme -- scripts/common.sh@368 -- # return 0 00:07:34.960 03:58:28 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.960 03:58:28 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:34.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.960 --rc genhtml_branch_coverage=1 00:07:34.960 --rc genhtml_function_coverage=1 00:07:34.960 --rc genhtml_legend=1 00:07:34.960 --rc geninfo_all_blocks=1 00:07:34.960 --rc geninfo_unexecuted_blocks=1 00:07:34.960 00:07:34.960 ' 00:07:34.960 03:58:28 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:34.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.960 --rc genhtml_branch_coverage=1 00:07:34.960 --rc genhtml_function_coverage=1 00:07:34.960 --rc genhtml_legend=1 00:07:34.960 --rc geninfo_all_blocks=1 00:07:34.960 --rc geninfo_unexecuted_blocks=1 00:07:34.960 00:07:34.960 ' 00:07:34.960 03:58:28 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:34.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.960 --rc genhtml_branch_coverage=1 00:07:34.960 --rc genhtml_function_coverage=1 00:07:34.960 --rc genhtml_legend=1 00:07:34.960 --rc geninfo_all_blocks=1 00:07:34.960 --rc geninfo_unexecuted_blocks=1 00:07:34.960 00:07:34.960 ' 00:07:34.960 03:58:28 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:34.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.960 --rc genhtml_branch_coverage=1 00:07:34.960 --rc genhtml_function_coverage=1 00:07:34.960 --rc genhtml_legend=1 00:07:34.960 --rc geninfo_all_blocks=1 00:07:34.960 --rc geninfo_unexecuted_blocks=1 00:07:34.960 00:07:34.960 ' 00:07:34.960 03:58:28 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:35.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.792 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.792 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:35.792 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.052 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.052 03:58:29 nvme -- nvme/nvme.sh@79 -- # uname 00:07:36.052 03:58:29 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:36.052 03:58:29 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:36.052 03:58:29 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:36.052 03:58:29 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:36.052 03:58:29 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:36.052 03:58:29 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:36.052 03:58:29 nvme -- common/autotest_common.sh@1071 -- # stubpid=63156 00:07:36.052 Waiting for stub to ready for secondary processes... 00:07:36.052 03:58:29 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:36.052 03:58:29 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:36.052 03:58:29 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/63156 ]] 00:07:36.052 03:58:29 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:36.052 03:58:29 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:36.052 [2024-10-13 03:58:29.097206] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:07:36.052 [2024-10-13 03:58:29.098016] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:36.992 [2024-10-13 03:58:29.848910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.992 [2024-10-13 03:58:29.965320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.992 [2024-10-13 03:58:29.965494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.992 [2024-10-13 03:58:29.965504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.992 [2024-10-13 03:58:29.990822] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:36.993 [2024-10-13 03:58:29.990912] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:36.993 [2024-10-13 03:58:30.006089] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:36.993 [2024-10-13 03:58:30.006412] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:36.993 [2024-10-13 03:58:30.012561] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:36.993 [2024-10-13 03:58:30.013285] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:36.993 [2024-10-13 03:58:30.013585] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:36.993 [2024-10-13 03:58:30.017896] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:36.993 [2024-10-13 03:58:30.018132] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:36.993 [2024-10-13 03:58:30.018213] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:36.993 [2024-10-13 03:58:30.021359] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:36.993 [2024-10-13 03:58:30.021599] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:36.993 [2024-10-13 03:58:30.021710] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:36.993 [2024-10-13 03:58:30.021772] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:36.993 [2024-10-13 03:58:30.021832] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:36.993 done. 00:07:36.993 03:58:30 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:36.993 03:58:30 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:36.993 03:58:30 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:36.993 03:58:30 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:36.993 03:58:30 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.993 03:58:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.993 ************************************ 00:07:36.993 START TEST nvme_reset 00:07:36.993 ************************************ 00:07:36.993 03:58:30 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:37.254 Initializing NVMe Controllers 00:07:37.254 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:37.254 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:37.254 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:37.254 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:37.254 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:37.254 00:07:37.254 real 0m0.218s 00:07:37.254 user 0m0.065s 00:07:37.254 sys 0m0.107s 00:07:37.254 03:58:30 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.254 03:58:30 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:37.254 ************************************ 00:07:37.254 END TEST nvme_reset 00:07:37.254 ************************************ 00:07:37.254 03:58:30 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:37.254 03:58:30 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.254 03:58:30 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.254 03:58:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.254 ************************************ 00:07:37.254 START TEST nvme_identify 00:07:37.254 ************************************ 00:07:37.254 03:58:30 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:37.254 03:58:30 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:37.254 03:58:30 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:37.254 03:58:30 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:37.254 03:58:30 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:37.254 03:58:30 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:37.254 03:58:30 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:37.254 03:58:30 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:37.254 03:58:30 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:37.254 03:58:30 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:37.523 03:58:30 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:37.523 03:58:30 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:37.523 03:58:30 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:37.523 [2024-10-13 
03:58:30.598873] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 63177 terminated unexpected 00:07:37.523 ===================================================== 00:07:37.523 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:37.523 ===================================================== 00:07:37.523 Controller Capabilities/Features 00:07:37.523 ================================ 00:07:37.523 Vendor ID: 1b36 00:07:37.523 Subsystem Vendor ID: 1af4 00:07:37.523 Serial Number: 12343 00:07:37.523 Model Number: QEMU NVMe Ctrl 00:07:37.523 Firmware Version: 8.0.0 00:07:37.523 Recommended Arb Burst: 6 00:07:37.523 IEEE OUI Identifier: 00 54 52 00:07:37.523 Multi-path I/O 00:07:37.523 May have multiple subsystem ports: No 00:07:37.523 May have multiple controllers: Yes 00:07:37.523 Associated with SR-IOV VF: No 00:07:37.523 Max Data Transfer Size: 524288 00:07:37.523 Max Number of Namespaces: 256 00:07:37.523 Max Number of I/O Queues: 64 00:07:37.523 NVMe Specification Version (VS): 1.4 00:07:37.523 NVMe Specification Version (Identify): 1.4 00:07:37.523 Maximum Queue Entries: 2048 00:07:37.523 Contiguous Queues Required: Yes 00:07:37.523 Arbitration Mechanisms Supported 00:07:37.523 Weighted Round Robin: Not Supported 00:07:37.523 Vendor Specific: Not Supported 00:07:37.523 Reset Timeout: 7500 ms 00:07:37.523 Doorbell Stride: 4 bytes 00:07:37.523 NVM Subsystem Reset: Not Supported 00:07:37.523 Command Sets Supported 00:07:37.523 NVM Command Set: Supported 00:07:37.523 Boot Partition: Not Supported 00:07:37.523 Memory Page Size Minimum: 4096 bytes 00:07:37.523 Memory Page Size Maximum: 65536 bytes 00:07:37.523 Persistent Memory Region: Not Supported 00:07:37.523 Optional Asynchronous Events Supported 00:07:37.523 Namespace Attribute Notices: Supported 00:07:37.523 Firmware Activation Notices: Not Supported 00:07:37.523 ANA Change Notices: Not Supported 00:07:37.523 PLE Aggregate Log Change Notices: Not Supported 00:07:37.523 LBA Status Info Alert Notices: Not Supported 00:07:37.523 EGE Aggregate Log Change Notices: Not Supported 00:07:37.523 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.523 Zone Descriptor Change Notices: Not Supported 00:07:37.523 Discovery Log Change Notices: Not Supported 00:07:37.523 Controller Attributes 00:07:37.523 128-bit Host Identifier: Not Supported 00:07:37.523 Non-Operational Permissive Mode: Not Supported 00:07:37.523 NVM Sets: Not Supported 00:07:37.523 Read Recovery Levels: Not Supported 00:07:37.523 Endurance Groups: Supported 00:07:37.523 Predictable Latency Mode: Not Supported 00:07:37.523 Traffic Based Keep ALive: Not Supported 00:07:37.523 Namespace Granularity: Not Supported 00:07:37.523 SQ Associations: Not Supported 00:07:37.523 UUID List: Not Supported 00:07:37.523 Multi-Domain Subsystem: Not Supported 00:07:37.523 Fixed Capacity Management: Not Supported 00:07:37.523 Variable Capacity Management: Not Supported 00:07:37.523 Delete Endurance Group: Not Supported 00:07:37.523 Delete NVM Set: Not Supported 00:07:37.523 Extended LBA Formats Supported: Supported 00:07:37.523 Flexible Data Placement Supported: Supported 00:07:37.523 00:07:37.523 Controller Memory Buffer Support 00:07:37.523 ================================ 00:07:37.523 Supported: No 00:07:37.523 00:07:37.523 Persistent Memory Region Support 00:07:37.523 ================================ 00:07:37.523 Supported: No 00:07:37.523 00:07:37.523 Admin Command Set Attributes 00:07:37.523 ============================ 00:07:37.523 Security Send/Receive: Not 
Supported 00:07:37.523 Format NVM: Supported 00:07:37.523 Firmware Activate/Download: Not Supported 00:07:37.523 Namespace Management: Supported 00:07:37.523 Device Self-Test: Not Supported 00:07:37.524 Directives: Supported 00:07:37.524 NVMe-MI: Not Supported 00:07:37.524 Virtualization Management: Not Supported 00:07:37.524 Doorbell Buffer Config: Supported 00:07:37.524 Get LBA Status Capability: Not Supported 00:07:37.524 Command & Feature Lockdown Capability: Not Supported 00:07:37.524 Abort Command Limit: 4 00:07:37.524 Async Event Request Limit: 4 00:07:37.524 Number of Firmware Slots: N/A 00:07:37.524 Firmware Slot 1 Read-Only: N/A 00:07:37.524 Firmware Activation Without Reset: N/A 00:07:37.524 Multiple Update Detection Support: N/A 00:07:37.524 Firmware Update Granularity: No Information Provided 00:07:37.524 Per-Namespace SMART Log: Yes 00:07:37.524 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.524 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:37.524 Command Effects Log Page: Supported 00:07:37.524 Get Log Page Extended Data: Supported 00:07:37.524 Telemetry Log Pages: Not Supported 00:07:37.524 Persistent Event Log Pages: Not Supported 00:07:37.524 Supported Log Pages Log Page: May Support 00:07:37.524 Commands Supported & Effects Log Page: Not Supported 00:07:37.524 Feature Identifiers & Effects Log Page:May Support 00:07:37.524 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.524 Data Area 4 for Telemetry Log: Not Supported 00:07:37.524 Error Log Page Entries Supported: 1 00:07:37.524 Keep Alive: Not Supported 00:07:37.524 00:07:37.524 NVM Command Set Attributes 00:07:37.524 ========================== 00:07:37.524 Submission Queue Entry Size 00:07:37.524 Max: 64 00:07:37.524 Min: 64 00:07:37.524 Completion Queue Entry Size 00:07:37.524 Max: 16 00:07:37.524 Min: 16 00:07:37.524 Number of Namespaces: 256 00:07:37.524 Compare Command: Supported 00:07:37.524 Write Uncorrectable Command: Not Supported 00:07:37.524 Dataset Management Command: Supported 00:07:37.524 Write Zeroes Command: Supported 00:07:37.524 Set Features Save Field: Supported 00:07:37.524 Reservations: Not Supported 00:07:37.524 Timestamp: Supported 00:07:37.524 Copy: Supported 00:07:37.524 Volatile Write Cache: Present 00:07:37.524 Atomic Write Unit (Normal): 1 00:07:37.524 Atomic Write Unit (PFail): 1 00:07:37.524 Atomic Compare & Write Unit: 1 00:07:37.524 Fused Compare & Write: Not Supported 00:07:37.524 Scatter-Gather List 00:07:37.524 SGL Command Set: Supported 00:07:37.524 SGL Keyed: Not Supported 00:07:37.524 SGL Bit Bucket Descriptor: Not Supported 00:07:37.524 SGL Metadata Pointer: Not Supported 00:07:37.524 Oversized SGL: Not Supported 00:07:37.524 SGL Metadata Address: Not Supported 00:07:37.524 SGL Offset: Not Supported 00:07:37.524 Transport SGL Data Block: Not Supported 00:07:37.524 Replay Protected Memory Block: Not Supported 00:07:37.524 00:07:37.524 Firmware Slot Information 00:07:37.524 ========================= 00:07:37.524 Active slot: 1 00:07:37.524 Slot 1 Firmware Revision: 1.0 00:07:37.524 00:07:37.524 00:07:37.524 Commands Supported and Effects 00:07:37.524 ============================== 00:07:37.524 Admin Commands 00:07:37.524 -------------- 00:07:37.524 Delete I/O Submission Queue (00h): Supported 00:07:37.524 Create I/O Submission Queue (01h): Supported 00:07:37.524 Get Log Page (02h): Supported 00:07:37.524 Delete I/O Completion Queue (04h): Supported 00:07:37.524 Create I/O Completion Queue (05h): Supported 00:07:37.524 Identify (06h): Supported 
00:07:37.524 Abort (08h): Supported 00:07:37.524 Set Features (09h): Supported 00:07:37.524 Get Features (0Ah): Supported 00:07:37.524 Asynchronous Event Request (0Ch): Supported 00:07:37.524 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.524 Directive Send (19h): Supported 00:07:37.524 Directive Receive (1Ah): Supported 00:07:37.524 Virtualization Management (1Ch): Supported 00:07:37.524 Doorbell Buffer Config (7Ch): Supported 00:07:37.524 Format NVM (80h): Supported LBA-Change 00:07:37.524 I/O Commands 00:07:37.524 ------------ 00:07:37.524 Flush (00h): Supported LBA-Change 00:07:37.524 Write (01h): Supported LBA-Change 00:07:37.524 Read (02h): Supported 00:07:37.524 Compare (05h): Supported 00:07:37.524 Write Zeroes (08h): Supported LBA-Change 00:07:37.524 Dataset Management (09h): Supported LBA-Change 00:07:37.524 Unknown (0Ch): Supported 00:07:37.524 Unknown (12h): Supported 00:07:37.524 Copy (19h): Supported LBA-Change 00:07:37.524 Unknown (1Dh): Supported LBA-Change 00:07:37.524 00:07:37.524 Error Log 00:07:37.524 ========= 00:07:37.524 00:07:37.524 Arbitration 00:07:37.524 =========== 00:07:37.524 Arbitration Burst: no limit 00:07:37.524 00:07:37.524 Power Management 00:07:37.524 ================ 00:07:37.524 Number of Power States: 1 00:07:37.524 Current Power State: Power State #0 00:07:37.524 Power State #0: 00:07:37.524 Max Power: 25.00 W 00:07:37.524 Non-Operational State: Operational 00:07:37.524 Entry Latency: 16 microseconds 00:07:37.524 Exit Latency: 4 microseconds 00:07:37.524 Relative Read Throughput: 0 00:07:37.524 Relative Read Latency: 0 00:07:37.524 Relative Write Throughput: 0 00:07:37.524 Relative Write Latency: 0 00:07:37.524 Idle Power: Not Reported 00:07:37.524 Active Power: Not Reported 00:07:37.524 Non-Operational Permissive Mode: Not Supported 00:07:37.524 00:07:37.524 Health Information 00:07:37.524 ================== 00:07:37.524 Critical Warnings: 00:07:37.524 Available Spare Space: OK 00:07:37.524 Temperature: OK 00:07:37.524 Device Reliability: OK 00:07:37.524 Read Only: No 00:07:37.524 Volatile Memory Backup: OK 00:07:37.524 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.524 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.524 Available Spare: 0% 00:07:37.524 Available Spare Threshold: 0% 00:07:37.524 Life Percentage Used: 0% 00:07:37.524 Data Units Read: 834 00:07:37.524 Data Units Written: 763 00:07:37.524 Host Read Commands: 39173 00:07:37.524 Host Write Commands: 38596 00:07:37.524 Controller Busy Time: 0 minutes 00:07:37.525 Power Cycles: 0 00:07:37.525 Power On Hours: 0 hours 00:07:37.525 Unsafe Shutdowns: 0 00:07:37.525 Unrecoverable Media Errors: 0 00:07:37.525 Lifetime Error Log Entries: 0 00:07:37.525 Warning Temperature Time: 0 minutes 00:07:37.525 Critical Temperature Time: 0 minutes 00:07:37.525 00:07:37.525 Number of Queues 00:07:37.525 ================ 00:07:37.525 Number of I/O Submission Queues: 64 00:07:37.525 Number of I/O Completion Queues: 64 00:07:37.525 00:07:37.525 ZNS Specific Controller Data 00:07:37.525 ============================ 00:07:37.525 Zone Append Size Limit: 0 00:07:37.525 00:07:37.525 00:07:37.525 Active Namespaces 00:07:37.525 ================= 00:07:37.525 Namespace ID:1 00:07:37.525 Error Recovery Timeout: Unlimited 00:07:37.525 Command Set Identifier: NVM (00h) 00:07:37.525 Deallocate: Supported 00:07:37.525 Deallocated/Unwritten Error: Supported 00:07:37.525 Deallocated Read Value: All 0x00 00:07:37.525 Deallocate in Write Zeroes: Not Supported 00:07:37.525 Deallocated Guard 
Field: 0xFFFF 00:07:37.525 Flush: Supported 00:07:37.525 Reservation: Not Supported 00:07:37.525 Namespace Sharing Capabilities: Multiple Controllers 00:07:37.525 Size (in LBAs): 262144 (1GiB) 00:07:37.525 Capacity (in LBAs): 262144 (1GiB) 00:07:37.525 Utilization (in LBAs): 262144 (1GiB) 00:07:37.525 Thin Provisioning: Not Supported 00:07:37.525 Per-NS Atomic Units: No 00:07:37.525 Maximum Single Source Range Length: 128 00:07:37.525 Maximum Copy Length: 128 00:07:37.525 Maximum Source Range Count: 128 00:07:37.525 NGUID/EUI64 Never Reused: No 00:07:37.525 Namespace Write Protected: No 00:07:37.525 Endurance group ID: 1 00:07:37.525 Number of LBA Formats: 8 00:07:37.525 Current LBA Format: LBA Format #04 00:07:37.525 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.525 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.525 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.525 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.525 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.525 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.525 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.525 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.525 00:07:37.525 Get Feature FDP: 00:07:37.525 ================ 00:07:37.525 Enabled: Yes 00:07:37.525 FDP configuration index: 0 00:07:37.525 00:07:37.525 FDP configurations log page 00:07:37.525 =========================== 00:07:37.525 Number of FDP configurations: 1 00:07:37.525 Version: 0 00:07:37.525 Size: 112 00:07:37.525 FDP Configuration Descriptor: 0 00:07:37.525 Descriptor Size: 96 00:07:37.525 Reclaim Group Identifier format: 2 00:07:37.525 FDP Volatile Write Cache: Not Present 00:07:37.525 FDP Configuration: Valid 00:07:37.525 Vendor Specific Size: 0 00:07:37.525 Number of Reclaim Groups: 2 00:07:37.525 Number of Recalim Unit Handles: 8 00:07:37.525 Max Placement Identifiers: 128 00:07:37.525 Number of Namespaces Suppprted: 256 00:07:37.525 Reclaim unit Nominal Size: 6000000 bytes 00:07:37.525 Estimated Reclaim Unit Time Limit: Not Reported 00:07:37.525 RUH Desc #000: RUH Type: Initially Isolated 00:07:37.525 RUH Desc #001: RUH Type: Initially Isolated 00:07:37.525 RUH Desc #002: RUH Type: Initially Isolated 00:07:37.525 RUH Desc #003: RUH Type: Initially Isolated 00:07:37.525 RUH Desc #004: RUH Type: Initially Isolated 00:07:37.525 RUH Desc #005: RUH Type: Initially Isolated 00:07:37.525 RUH Desc #006: RUH Type: Initially Isolated 00:07:37.525 RUH Desc #007: RUH Type: Initially Isolated 00:07:37.525 00:07:37.525 FDP reclaim unit handle usage log page 00:07:37.525 ==================================[2024-10-13 03:58:30.603168] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 63177 terminated unexpected 00:07:37.525 ==== 00:07:37.525 Number of Reclaim Unit Handles: 8 00:07:37.525 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:37.525 RUH Usage Desc #001: RUH Attributes: Unused 00:07:37.525 RUH Usage Desc #002: RUH Attributes: Unused 00:07:37.525 RUH Usage Desc #003: RUH Attributes: Unused 00:07:37.525 RUH Usage Desc #004: RUH Attributes: Unused 00:07:37.525 RUH Usage Desc #005: RUH Attributes: Unused 00:07:37.525 RUH Usage Desc #006: RUH Attributes: Unused 00:07:37.525 RUH Usage Desc #007: RUH Attributes: Unused 00:07:37.525 00:07:37.525 FDP statistics log page 00:07:37.525 ======================= 00:07:37.525 Host bytes with metadata written: 481402880 00:07:37.525 Media bytes with metadata written: 481447936 00:07:37.525 Media bytes 
erased: 0 00:07:37.525 00:07:37.525 FDP events log page 00:07:37.525 =================== 00:07:37.525 Number of FDP events: 0 00:07:37.525 00:07:37.525 NVM Specific Namespace Data 00:07:37.525 =========================== 00:07:37.525 Logical Block Storage Tag Mask: 0 00:07:37.525 Protection Information Capabilities: 00:07:37.525 16b Guard Protection Information Storage Tag Support: No 00:07:37.525 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.525 Storage Tag Check Read Support: No 00:07:37.525 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.525 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.525 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.525 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.525 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.525 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.525 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.525 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.525 ===================================================== 00:07:37.525 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:37.525 ===================================================== 00:07:37.525 Controller Capabilities/Features 00:07:37.525 ================================ 00:07:37.525 Vendor ID: 1b36 00:07:37.525 Subsystem Vendor ID: 1af4 00:07:37.525 Serial Number: 12340 00:07:37.525 Model Number: QEMU NVMe Ctrl 00:07:37.525 Firmware Version: 8.0.0 00:07:37.525 Recommended Arb Burst: 6 00:07:37.525 IEEE OUI Identifier: 00 54 52 00:07:37.525 Multi-path I/O 00:07:37.526 May have multiple subsystem ports: No 00:07:37.526 May have multiple controllers: No 00:07:37.526 Associated with SR-IOV VF: No 00:07:37.526 Max Data Transfer Size: 524288 00:07:37.526 Max Number of Namespaces: 256 00:07:37.526 Max Number of I/O Queues: 64 00:07:37.526 NVMe Specification Version (VS): 1.4 00:07:37.526 NVMe Specification Version (Identify): 1.4 00:07:37.526 Maximum Queue Entries: 2048 00:07:37.526 Contiguous Queues Required: Yes 00:07:37.526 Arbitration Mechanisms Supported 00:07:37.526 Weighted Round Robin: Not Supported 00:07:37.526 Vendor Specific: Not Supported 00:07:37.526 Reset Timeout: 7500 ms 00:07:37.526 Doorbell Stride: 4 bytes 00:07:37.526 NVM Subsystem Reset: Not Supported 00:07:37.526 Command Sets Supported 00:07:37.526 NVM Command Set: Supported 00:07:37.526 Boot Partition: Not Supported 00:07:37.526 Memory Page Size Minimum: 4096 bytes 00:07:37.526 Memory Page Size Maximum: 65536 bytes 00:07:37.526 Persistent Memory Region: Not Supported 00:07:37.526 Optional Asynchronous Events Supported 00:07:37.526 Namespace Attribute Notices: Supported 00:07:37.526 Firmware Activation Notices: Not Supported 00:07:37.526 ANA Change Notices: Not Supported 00:07:37.526 PLE Aggregate Log Change Notices: Not Supported 00:07:37.526 LBA Status Info Alert Notices: Not Supported 00:07:37.526 EGE Aggregate Log Change Notices: Not Supported 00:07:37.526 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.526 Zone Descriptor Change Notices: Not Supported 00:07:37.526 Discovery Log Change Notices: Not Supported 00:07:37.526 Controller Attributes 00:07:37.526 128-bit Host 
Identifier: Not Supported 00:07:37.526 Non-Operational Permissive Mode: Not Supported 00:07:37.526 NVM Sets: Not Supported 00:07:37.526 Read Recovery Levels: Not Supported 00:07:37.526 Endurance Groups: Not Supported 00:07:37.526 Predictable Latency Mode: Not Supported 00:07:37.526 Traffic Based Keep ALive: Not Supported 00:07:37.526 Namespace Granularity: Not Supported 00:07:37.526 SQ Associations: Not Supported 00:07:37.526 UUID List: Not Supported 00:07:37.526 Multi-Domain Subsystem: Not Supported 00:07:37.526 Fixed Capacity Management: Not Supported 00:07:37.526 Variable Capacity Management: Not Supported 00:07:37.526 Delete Endurance Group: Not Supported 00:07:37.526 Delete NVM Set: Not Supported 00:07:37.526 Extended LBA Formats Supported: Supported 00:07:37.526 Flexible Data Placement Supported: Not Supported 00:07:37.526 00:07:37.526 Controller Memory Buffer Support 00:07:37.526 ================================ 00:07:37.526 Supported: No 00:07:37.526 00:07:37.526 Persistent Memory Region Support 00:07:37.526 ================================ 00:07:37.526 Supported: No 00:07:37.526 00:07:37.526 Admin Command Set Attributes 00:07:37.526 ============================ 00:07:37.526 Security Send/Receive: Not Supported 00:07:37.526 Format NVM: Supported 00:07:37.526 Firmware Activate/Download: Not Supported 00:07:37.526 Namespace Management: Supported 00:07:37.526 Device Self-Test: Not Supported 00:07:37.526 Directives: Supported 00:07:37.526 NVMe-MI: Not Supported 00:07:37.526 Virtualization Management: Not Supported 00:07:37.526 Doorbell Buffer Config: Supported 00:07:37.526 Get LBA Status Capability: Not Supported 00:07:37.526 Command & Feature Lockdown Capability: Not Supported 00:07:37.526 Abort Command Limit: 4 00:07:37.526 Async Event Request Limit: 4 00:07:37.526 Number of Firmware Slots: N/A 00:07:37.526 Firmware Slot 1 Read-Only: N/A 00:07:37.526 Firmware Activation Without Reset: N/A 00:07:37.526 Multiple Update Detection Support: N/A 00:07:37.526 Firmware Update Granularity: No Information Provided 00:07:37.526 Per-Namespace SMART Log: Yes 00:07:37.526 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.526 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:37.526 Command Effects Log Page: Supported 00:07:37.526 Get Log Page Extended Data: Supported 00:07:37.526 Telemetry Log Pages: Not Supported 00:07:37.526 Persistent Event Log Pages: Not Supported 00:07:37.526 Supported Log Pages Log Page: May Support 00:07:37.526 Commands Supported & Effects Log Page: Not Supported 00:07:37.526 Feature Identifiers & Effects Log Page:May Support 00:07:37.526 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.526 Data Area 4 for Telemetry Log: Not Supported 00:07:37.526 Error Log Page Entries Supported: 1 00:07:37.526 Keep Alive: Not Supported 00:07:37.526 00:07:37.526 NVM Command Set Attributes 00:07:37.526 ========================== 00:07:37.526 Submission Queue Entry Size 00:07:37.526 Max: 64 00:07:37.526 Min: 64 00:07:37.526 Completion Queue Entry Size 00:07:37.526 Max: 16 00:07:37.526 Min: 16 00:07:37.526 Number of Namespaces: 256 00:07:37.526 Compare Command: Supported 00:07:37.526 Write Uncorrectable Command: Not Supported 00:07:37.526 Dataset Management Command: Supported 00:07:37.526 Write Zeroes Command: Supported 00:07:37.526 Set Features Save Field: Supported 00:07:37.526 Reservations: Not Supported 00:07:37.526 Timestamp: Supported 00:07:37.526 Copy: Supported 00:07:37.526 Volatile Write Cache: Present 00:07:37.526 Atomic Write Unit (Normal): 1 00:07:37.526 Atomic 
Write Unit (PFail): 1 00:07:37.526 Atomic Compare & Write Unit: 1 00:07:37.526 Fused Compare & Write: Not Supported 00:07:37.526 Scatter-Gather List 00:07:37.526 SGL Command Set: Supported 00:07:37.526 SGL Keyed: Not Supported 00:07:37.526 SGL Bit Bucket Descriptor: Not Supported 00:07:37.526 SGL Metadata Pointer: Not Supported 00:07:37.526 Oversized SGL: Not Supported 00:07:37.526 SGL Metadata Address: Not Supported 00:07:37.526 SGL Offset: Not Supported 00:07:37.526 Transport SGL Data Block: Not Supported 00:07:37.526 Replay Protected Memory Block: Not Supported 00:07:37.526 00:07:37.526 Firmware Slot Information 00:07:37.526 ========================= 00:07:37.526 Active slot: 1 00:07:37.526 Slot 1 Firmware Revision: 1.0 00:07:37.526 00:07:37.526 00:07:37.526 Commands Supported and Effects 00:07:37.526 ============================== 00:07:37.526 Admin Commands 00:07:37.526 -------------- 00:07:37.527 Delete I/O Submission Queue (00h): Supported 00:07:37.527 Create I/O Submission Queue (01h): Supported 00:07:37.527 Get Log Page (02h): Supported 00:07:37.527 Delete I/O Completion Queue (04h): Supported 00:07:37.527 Create I/O Completion Queue (05h): Supported 00:07:37.527 Identify (06h): Supported 00:07:37.527 Abort (08h): Supported 00:07:37.527 Set Features (09h): Supported 00:07:37.527 Get Features (0Ah): Supported 00:07:37.527 Asynchronous Event Request (0Ch): Supported 00:07:37.527 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.527 Directive Send (19h): Supported 00:07:37.527 Directive Receive (1Ah): Supported 00:07:37.527 Virtualization Management (1Ch): Supported 00:07:37.527 Doorbell Buffer Config (7Ch): Supported 00:07:37.527 Format NVM (80h): Supported LBA-Change 00:07:37.527 I/O Commands 00:07:37.527 ------------ 00:07:37.527 Flush (00h): Supported LBA-Change 00:07:37.527 Write (01h): Supported LBA-Change 00:07:37.527 Read (02h): Supported 00:07:37.527 Compare (05h): Supported 00:07:37.527 Write Zeroes (08h): Supported LBA-Change 00:07:37.527 Dataset Management (09h): Supported LBA-Change 00:07:37.527 Unknown (0Ch): Supported 00:07:37.527 Unknown (12h): Supported 00:07:37.527 Copy (19h): Supported LBA-Change 00:07:37.527 Unknown (1Dh): Supported LBA-Change 00:07:37.527 00:07:37.527 Error Log 00:07:37.527 ========= 00:07:37.527 00:07:37.527 Arbitration 00:07:37.527 =========== 00:07:37.527 Arbitration Burst: no limit 00:07:37.527 00:07:37.527 Power Management 00:07:37.527 ================ 00:07:37.527 Number of Power States: 1 00:07:37.527 Current Power State: Power State #0 00:07:37.527 Power State #0: 00:07:37.527 Max Power: 25.00 W 00:07:37.527 Non-Operational State: Operational 00:07:37.527 Entry Latency: 16 microseconds 00:07:37.527 Exit Latency: 4 microseconds 00:07:37.527 Relative Read Throughput: 0 00:07:37.527 Relative Read Latency: 0 00:07:37.527 Relative Write Throughput: 0 00:07:37.527 Relative Write Latency: 0 00:07:37.527 Idle Power: Not Reported 00:07:37.527 Active Power: Not Reported 00:07:37.527 Non-Operational Permissive Mode: Not Supported 00:07:37.527 00:07:37.527 Health Information 00:07:37.527 ================== 00:07:37.527 Critical Warnings: 00:07:37.527 Available Spare Space: OK 00:07:37.527 Temperature: OK 00:07:37.527 Device Reliability: OK 00:07:37.527 Read Only: No 00:07:37.527 Volatile Memory Backup: OK 00:07:37.527 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.527 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.527 Available Spare: 0% 00:07:37.527 Available Spare Threshold: 0% 00:07:37.527 Life Percentage Used: 0% 
00:07:37.527 Data Units Read: 694 00:07:37.527 Data Units Written: 622 00:07:37.527 Host Read Commands: 37477 00:07:37.527 Host Write Commands: 37263 00:07:37.527 Controller Busy Time: 0 minutes 00:07:37.527 Power Cycles: 0 00:07:37.527 Power On Hours: 0 hours 00:07:37.527 Unsafe Shutdowns: 0 00:07:37.527 Unrecoverable Media Errors: 0 00:07:37.527 Lifetime Error Log Entries: 0 00:07:37.527 Warning Temperature Time: 0 minutes 00:07:37.527 Critical Temperature Time: 0 minutes 00:07:37.527 00:07:37.527 Number of Queues 00:07:37.527 ================ 00:07:37.527 Number of I/O Submission Queues: 64 00:07:37.527 Number of I/O Completion Queues: 64 00:07:37.527 00:07:37.527 ZNS Specific Controller Data 00:07:37.527 ============================ 00:07:37.527 Zone Append Size Limit: 0 00:07:37.527 00:07:37.527 00:07:37.527 Active Namespaces 00:07:37.527 ================= 00:07:37.527 Namespace ID:1 00:07:37.527 Error Recovery Timeout: Unlimited 00:07:37.527 Command Set Identifier: NVM (00h) 00:07:37.527 Deallocate: Supported 00:07:37.527 Deallocated/Unwritten Error: Supported 00:07:37.527 Deallocated Read Value: All 0x00 00:07:37.527 Deallocate in Write Zeroes: Not Supported 00:07:37.527 Deallocated Guard Field: 0xFFFF 00:07:37.527 Flush: Supported 00:07:37.527 Reservation: Not Supported 00:07:37.527 Metadata Transferred as: Separate Metadata Buffer 00:07:37.527 Namespace Sharing Capabilities: Private 00:07:37.527 Size (in LBAs): 1548666 (5GiB) 00:07:37.527 Capacity (in LBAs): 1548666 (5GiB) 00:07:37.527 Utilization (in LBAs): 1548666 (5GiB) 00:07:37.527 Thin Provisioning: Not Supported 00:07:37.527 Per-NS Atomic Units: No 00:07:37.527 Maximum Single Source Range Length: 128 00:07:37.527 Maximum Copy Length: 128 00:07:37.527 Maximum Source Range Count: 128 00:07:37.527 NGUID/EUI64 Never Reused: No 00:07:37.527 Namespace Write Protected: No 00:07:37.527 Number of LBA Formats: 8 00:07:37.527 Current LBA Format: [2024-10-13 03:58:30.605529] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 63177 terminated unexpected 00:07:37.527 LBA Format #07 00:07:37.527 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.527 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.527 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.527 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.527 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.527 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.527 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.527 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.527 00:07:37.527 NVM Specific Namespace Data 00:07:37.527 =========================== 00:07:37.527 Logical Block Storage Tag Mask: 0 00:07:37.527 Protection Information Capabilities: 00:07:37.527 16b Guard Protection Information Storage Tag Support: No 00:07:37.527 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.527 Storage Tag Check Read Support: No 00:07:37.527 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.527 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.527 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.527 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.527 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.527 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.528 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.528 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.528 ===================================================== 00:07:37.528 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:37.528 ===================================================== 00:07:37.528 Controller Capabilities/Features 00:07:37.528 ================================ 00:07:37.528 Vendor ID: 1b36 00:07:37.528 Subsystem Vendor ID: 1af4 00:07:37.528 Serial Number: 12341 00:07:37.528 Model Number: QEMU NVMe Ctrl 00:07:37.528 Firmware Version: 8.0.0 00:07:37.528 Recommended Arb Burst: 6 00:07:37.528 IEEE OUI Identifier: 00 54 52 00:07:37.528 Multi-path I/O 00:07:37.528 May have multiple subsystem ports: No 00:07:37.528 May have multiple controllers: No 00:07:37.528 Associated with SR-IOV VF: No 00:07:37.528 Max Data Transfer Size: 524288 00:07:37.528 Max Number of Namespaces: 256 00:07:37.528 Max Number of I/O Queues: 64 00:07:37.528 NVMe Specification Version (VS): 1.4 00:07:37.528 NVMe Specification Version (Identify): 1.4 00:07:37.528 Maximum Queue Entries: 2048 00:07:37.528 Contiguous Queues Required: Yes 00:07:37.528 Arbitration Mechanisms Supported 00:07:37.528 Weighted Round Robin: Not Supported 00:07:37.528 Vendor Specific: Not Supported 00:07:37.528 Reset Timeout: 7500 ms 00:07:37.528 Doorbell Stride: 4 bytes 00:07:37.528 NVM Subsystem Reset: Not Supported 00:07:37.528 Command Sets Supported 00:07:37.528 NVM Command Set: Supported 00:07:37.528 Boot Partition: Not Supported 00:07:37.528 Memory Page Size Minimum: 4096 bytes 00:07:37.528 Memory Page Size Maximum: 65536 bytes 00:07:37.528 Persistent Memory Region: Not Supported 00:07:37.528 Optional Asynchronous Events Supported 00:07:37.528 Namespace Attribute Notices: Supported 00:07:37.528 Firmware Activation Notices: Not Supported 00:07:37.528 ANA Change Notices: Not Supported 00:07:37.528 PLE Aggregate Log Change Notices: Not Supported 00:07:37.528 LBA Status Info Alert Notices: Not Supported 00:07:37.528 EGE Aggregate Log Change Notices: Not Supported 00:07:37.528 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.528 Zone Descriptor Change Notices: Not Supported 00:07:37.528 Discovery Log Change Notices: Not Supported 00:07:37.528 Controller Attributes 00:07:37.528 128-bit Host Identifier: Not Supported 00:07:37.528 Non-Operational Permissive Mode: Not Supported 00:07:37.528 NVM Sets: Not Supported 00:07:37.528 Read Recovery Levels: Not Supported 00:07:37.528 Endurance Groups: Not Supported 00:07:37.528 Predictable Latency Mode: Not Supported 00:07:37.528 Traffic Based Keep ALive: Not Supported 00:07:37.528 Namespace Granularity: Not Supported 00:07:37.528 SQ Associations: Not Supported 00:07:37.528 UUID List: Not Supported 00:07:37.528 Multi-Domain Subsystem: Not Supported 00:07:37.528 Fixed Capacity Management: Not Supported 00:07:37.528 Variable Capacity Management: Not Supported 00:07:37.528 Delete Endurance Group: Not Supported 00:07:37.528 Delete NVM Set: Not Supported 00:07:37.528 Extended LBA Formats Supported: Supported 00:07:37.528 Flexible Data Placement Supported: Not Supported 00:07:37.528 00:07:37.528 Controller Memory Buffer Support 00:07:37.528 ================================ 00:07:37.528 Supported: No 00:07:37.528 00:07:37.528 Persistent Memory Region Support 00:07:37.528 ================================ 00:07:37.528 
Supported: No 00:07:37.528 00:07:37.528 Admin Command Set Attributes 00:07:37.528 ============================ 00:07:37.528 Security Send/Receive: Not Supported 00:07:37.528 Format NVM: Supported 00:07:37.528 Firmware Activate/Download: Not Supported 00:07:37.528 Namespace Management: Supported 00:07:37.528 Device Self-Test: Not Supported 00:07:37.528 Directives: Supported 00:07:37.528 NVMe-MI: Not Supported 00:07:37.528 Virtualization Management: Not Supported 00:07:37.528 Doorbell Buffer Config: Supported 00:07:37.528 Get LBA Status Capability: Not Supported 00:07:37.528 Command & Feature Lockdown Capability: Not Supported 00:07:37.528 Abort Command Limit: 4 00:07:37.528 Async Event Request Limit: 4 00:07:37.528 Number of Firmware Slots: N/A 00:07:37.528 Firmware Slot 1 Read-Only: N/A 00:07:37.528 Firmware Activation Without Reset: N/A 00:07:37.528 Multiple Update Detection Support: N/A 00:07:37.528 Firmware Update Granularity: No Information Provided 00:07:37.528 Per-Namespace SMART Log: Yes 00:07:37.528 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.528 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:37.528 Command Effects Log Page: Supported 00:07:37.528 Get Log Page Extended Data: Supported 00:07:37.528 Telemetry Log Pages: Not Supported 00:07:37.528 Persistent Event Log Pages: Not Supported 00:07:37.528 Supported Log Pages Log Page: May Support 00:07:37.528 Commands Supported & Effects Log Page: Not Supported 00:07:37.528 Feature Identifiers & Effects Log Page:May Support 00:07:37.528 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.528 Data Area 4 for Telemetry Log: Not Supported 00:07:37.528 Error Log Page Entries Supported: 1 00:07:37.528 Keep Alive: Not Supported 00:07:37.528 00:07:37.528 NVM Command Set Attributes 00:07:37.528 ========================== 00:07:37.528 Submission Queue Entry Size 00:07:37.528 Max: 64 00:07:37.528 Min: 64 00:07:37.528 Completion Queue Entry Size 00:07:37.528 Max: 16 00:07:37.528 Min: 16 00:07:37.528 Number of Namespaces: 256 00:07:37.528 Compare Command: Supported 00:07:37.528 Write Uncorrectable Command: Not Supported 00:07:37.528 Dataset Management Command: Supported 00:07:37.528 Write Zeroes Command: Supported 00:07:37.528 Set Features Save Field: Supported 00:07:37.528 Reservations: Not Supported 00:07:37.528 Timestamp: Supported 00:07:37.528 Copy: Supported 00:07:37.528 Volatile Write Cache: Present 00:07:37.528 Atomic Write Unit (Normal): 1 00:07:37.528 Atomic Write Unit (PFail): 1 00:07:37.528 Atomic Compare & Write Unit: 1 00:07:37.528 Fused Compare & Write: Not Supported 00:07:37.528 Scatter-Gather List 00:07:37.528 SGL Command Set: Supported 00:07:37.528 SGL Keyed: Not Supported 00:07:37.529 SGL Bit Bucket Descriptor: Not Supported 00:07:37.529 SGL Metadata Pointer: Not Supported 00:07:37.529 Oversized SGL: Not Supported 00:07:37.529 SGL Metadata Address: Not Supported 00:07:37.529 SGL Offset: Not Supported 00:07:37.529 Transport SGL Data Block: Not Supported 00:07:37.529 Replay Protected Memory Block: Not Supported 00:07:37.529 00:07:37.529 Firmware Slot Information 00:07:37.529 ========================= 00:07:37.529 Active slot: 1 00:07:37.529 Slot 1 Firmware Revision: 1.0 00:07:37.529 00:07:37.529 00:07:37.529 Commands Supported and Effects 00:07:37.529 ============================== 00:07:37.529 Admin Commands 00:07:37.529 -------------- 00:07:37.529 Delete I/O Submission Queue (00h): Supported 00:07:37.529 Create I/O Submission Queue (01h): Supported 00:07:37.529 Get Log Page (02h): Supported 00:07:37.529 
Delete I/O Completion Queue (04h): Supported 00:07:37.529 Create I/O Completion Queue (05h): Supported 00:07:37.529 Identify (06h): Supported 00:07:37.529 Abort (08h): Supported 00:07:37.529 Set Features (09h): Supported 00:07:37.529 Get Features (0Ah): Supported 00:07:37.529 Asynchronous Event Request (0Ch): Supported 00:07:37.529 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.529 Directive Send (19h): Supported 00:07:37.529 Directive Receive (1Ah): Supported 00:07:37.529 Virtualization Management (1Ch): Supported 00:07:37.529 Doorbell Buffer Config (7Ch): Supported 00:07:37.529 Format NVM (80h): Supported LBA-Change 00:07:37.529 I/O Commands 00:07:37.529 ------------ 00:07:37.529 Flush (00h): Supported LBA-Change 00:07:37.529 Write (01h): Supported LBA-Change 00:07:37.529 Read (02h): Supported 00:07:37.529 Compare (05h): Supported 00:07:37.529 Write Zeroes (08h): Supported LBA-Change 00:07:37.529 Dataset Management (09h): Supported LBA-Change 00:07:37.529 Unknown (0Ch): Supported 00:07:37.529 Unknown (12h): Supported 00:07:37.529 Copy (19h): Supported LBA-Change 00:07:37.529 Unknown (1Dh): Supported LBA-Change 00:07:37.529 00:07:37.529 Error Log 00:07:37.529 ========= 00:07:37.529 00:07:37.529 Arbitration 00:07:37.529 =========== 00:07:37.529 Arbitration Burst: no limit 00:07:37.529 00:07:37.529 Power Management 00:07:37.529 ================ 00:07:37.529 Number of Power States: 1 00:07:37.529 Current Power State: Power State #0 00:07:37.529 Power State #0: 00:07:37.529 Max Power: 25.00 W 00:07:37.529 Non-Operational State: Operational 00:07:37.529 Entry Latency: 16 microseconds 00:07:37.529 Exit Latency: 4 microseconds 00:07:37.529 Relative Read Throughput: 0 00:07:37.529 Relative Read Latency: 0 00:07:37.529 Relative Write Throughput: 0 00:07:37.529 Relative Write Latency: 0 00:07:37.529 Idle Power: Not Reported 00:07:37.529 Active Power: Not Reported 00:07:37.529 Non-Operational Permissive Mode: Not Supported 00:07:37.529 00:07:37.529 Health Information 00:07:37.529 ================== 00:07:37.529 Critical Warnings: 00:07:37.529 Available Spare Space: OK 00:07:37.529 Temperature: OK 00:07:37.529 Device Reliability: OK 00:07:37.529 Read Only: No 00:07:37.529 Volatile Memory Backup: OK 00:07:37.529 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.529 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.529 Available Spare: 0% 00:07:37.529 Available Spare Threshold: 0% 00:07:37.529 Life Percentage Used: 0% 00:07:37.529 Data Units Read: 1086 00:07:37.529 Data Units Written: 952 00:07:37.529 Host Read Commands: 57024 00:07:37.529 Host Write Commands: 55811 00:07:37.529 Controller Busy Time: 0 minutes 00:07:37.529 Power Cycles: 0 00:07:37.529 Power On Hours: 0 hours 00:07:37.529 Unsafe Shutdowns: 0 00:07:37.529 Unrecoverable Media Errors: 0 00:07:37.529 Lifetime Error Log Entries: 0 00:07:37.529 Warning Temperature Time: 0 minutes 00:07:37.529 Critical Temperature Time: 0 minutes 00:07:37.529 00:07:37.529 Number of Queues 00:07:37.529 ================ 00:07:37.529 Number of I/O Submission Queues: 64 00:07:37.529 Number of I/O Completion Queues: 64 00:07:37.529 00:07:37.529 ZNS Specific Controller Data 00:07:37.529 ============================ 00:07:37.529 Zone Append Size Limit: 0 00:07:37.529 00:07:37.529 00:07:37.529 Active Namespaces 00:07:37.529 ================= 00:07:37.529 Namespace ID:1 00:07:37.529 Error Recovery Timeout: Unlimited 00:07:37.529 Command Set Identifier: NVM (00h) 00:07:37.529 Deallocate: Supported 00:07:37.529 Deallocated/Unwritten Error: 
Supported 00:07:37.529 Deallocated Read Value: All 0x00 00:07:37.529 Deallocate in Write Zeroes: Not Supported 00:07:37.529 Deallocated Guard Field: 0xFFFF 00:07:37.529 Flush: Supported 00:07:37.529 Reservation: Not Supported 00:07:37.529 Namespace Sharing Capabilities: Private 00:07:37.529 Size (in LBAs): 1310720 (5GiB) 00:07:37.529 Capacity (in LBAs): 1310720 (5GiB) 00:07:37.529 Utilization (in LBAs): 1310720 (5GiB) 00:07:37.529 Thin Provisioning: Not Supported 00:07:37.529 Per-NS Atomic Units: No 00:07:37.529 Maximum Single Source Range Length: 128 00:07:37.529 Maximum Copy Length: 128 00:07:37.529 Maximum Source Range Count: 128 00:07:37.529 NGUID/EUI64 Never Reused: No 00:07:37.529 Namespace Write Protected: No 00:07:37.529 Number of LBA Formats: 8 00:07:37.529 Current LBA Format: LBA Format #04 00:07:37.529 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.529 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.529 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.529 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.529 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.529 LBA Forma[2024-10-13 03:58:30.606713] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 63177 terminated unexpected 00:07:37.529 t #05: Data Size: 4096 Metadata Size: 8 00:07:37.529 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.529 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.529 00:07:37.529 NVM Specific Namespace Data 00:07:37.529 =========================== 00:07:37.530 Logical Block Storage Tag Mask: 0 00:07:37.530 Protection Information Capabilities: 00:07:37.530 16b Guard Protection Information Storage Tag Support: No 00:07:37.530 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.530 Storage Tag Check Read Support: No 00:07:37.530 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.530 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.530 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.530 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.530 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.530 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.530 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.530 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.530 ===================================================== 00:07:37.530 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:37.530 ===================================================== 00:07:37.530 Controller Capabilities/Features 00:07:37.530 ================================ 00:07:37.530 Vendor ID: 1b36 00:07:37.530 Subsystem Vendor ID: 1af4 00:07:37.530 Serial Number: 12342 00:07:37.530 Model Number: QEMU NVMe Ctrl 00:07:37.530 Firmware Version: 8.0.0 00:07:37.530 Recommended Arb Burst: 6 00:07:37.530 IEEE OUI Identifier: 00 54 52 00:07:37.530 Multi-path I/O 00:07:37.530 May have multiple subsystem ports: No 00:07:37.530 May have multiple controllers: No 00:07:37.530 Associated with SR-IOV VF: No 00:07:37.530 Max Data Transfer Size: 524288 00:07:37.530 Max Number of Namespaces: 256 00:07:37.530 Max Number of I/O Queues: 64 00:07:37.530 NVMe 
Specification Version (VS): 1.4 00:07:37.530 NVMe Specification Version (Identify): 1.4 00:07:37.530 Maximum Queue Entries: 2048 00:07:37.530 Contiguous Queues Required: Yes 00:07:37.530 Arbitration Mechanisms Supported 00:07:37.530 Weighted Round Robin: Not Supported 00:07:37.530 Vendor Specific: Not Supported 00:07:37.530 Reset Timeout: 7500 ms 00:07:37.530 Doorbell Stride: 4 bytes 00:07:37.530 NVM Subsystem Reset: Not Supported 00:07:37.530 Command Sets Supported 00:07:37.530 NVM Command Set: Supported 00:07:37.530 Boot Partition: Not Supported 00:07:37.530 Memory Page Size Minimum: 4096 bytes 00:07:37.530 Memory Page Size Maximum: 65536 bytes 00:07:37.530 Persistent Memory Region: Not Supported 00:07:37.530 Optional Asynchronous Events Supported 00:07:37.530 Namespace Attribute Notices: Supported 00:07:37.530 Firmware Activation Notices: Not Supported 00:07:37.530 ANA Change Notices: Not Supported 00:07:37.530 PLE Aggregate Log Change Notices: Not Supported 00:07:37.530 LBA Status Info Alert Notices: Not Supported 00:07:37.530 EGE Aggregate Log Change Notices: Not Supported 00:07:37.530 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.530 Zone Descriptor Change Notices: Not Supported 00:07:37.530 Discovery Log Change Notices: Not Supported 00:07:37.530 Controller Attributes 00:07:37.530 128-bit Host Identifier: Not Supported 00:07:37.530 Non-Operational Permissive Mode: Not Supported 00:07:37.530 NVM Sets: Not Supported 00:07:37.530 Read Recovery Levels: Not Supported 00:07:37.530 Endurance Groups: Not Supported 00:07:37.530 Predictable Latency Mode: Not Supported 00:07:37.530 Traffic Based Keep ALive: Not Supported 00:07:37.530 Namespace Granularity: Not Supported 00:07:37.530 SQ Associations: Not Supported 00:07:37.530 UUID List: Not Supported 00:07:37.530 Multi-Domain Subsystem: Not Supported 00:07:37.530 Fixed Capacity Management: Not Supported 00:07:37.530 Variable Capacity Management: Not Supported 00:07:37.530 Delete Endurance Group: Not Supported 00:07:37.530 Delete NVM Set: Not Supported 00:07:37.530 Extended LBA Formats Supported: Supported 00:07:37.530 Flexible Data Placement Supported: Not Supported 00:07:37.530 00:07:37.530 Controller Memory Buffer Support 00:07:37.530 ================================ 00:07:37.530 Supported: No 00:07:37.530 00:07:37.530 Persistent Memory Region Support 00:07:37.530 ================================ 00:07:37.530 Supported: No 00:07:37.530 00:07:37.530 Admin Command Set Attributes 00:07:37.530 ============================ 00:07:37.530 Security Send/Receive: Not Supported 00:07:37.530 Format NVM: Supported 00:07:37.530 Firmware Activate/Download: Not Supported 00:07:37.530 Namespace Management: Supported 00:07:37.530 Device Self-Test: Not Supported 00:07:37.530 Directives: Supported 00:07:37.530 NVMe-MI: Not Supported 00:07:37.530 Virtualization Management: Not Supported 00:07:37.530 Doorbell Buffer Config: Supported 00:07:37.530 Get LBA Status Capability: Not Supported 00:07:37.530 Command & Feature Lockdown Capability: Not Supported 00:07:37.530 Abort Command Limit: 4 00:07:37.530 Async Event Request Limit: 4 00:07:37.530 Number of Firmware Slots: N/A 00:07:37.531 Firmware Slot 1 Read-Only: N/A 00:07:37.531 Firmware Activation Without Reset: N/A 00:07:37.531 Multiple Update Detection Support: N/A 00:07:37.531 Firmware Update Granularity: No Information Provided 00:07:37.531 Per-Namespace SMART Log: Yes 00:07:37.531 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.531 Subsystem NQN: nqn.2019-08.org.qemu:12342 
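The namespace entries in these dumps report Size, Capacity and Utilization in LBAs alongside a GiB figure; that figure appears to be the LBA count multiplied by the data size of the current LBA format, rounded down to whole GiB. A minimal Python sketch of the arithmetic follows; the helper name lbas_to_gib is hypothetical and not part of nvme.sh or the SPDK tools, it only reproduces the numbers printed in these dumps.

# Illustrative sketch only -- not part of nvme.sh or the SPDK repo.
# Reproduces the "(5GiB)" / "(4GiB)" figures shown next to "Size (in LBAs)",
# assuming the current LBA format (#04) with a 4096-byte data size.
def lbas_to_gib(num_lbas: int, lba_data_size: int = 4096) -> float:
    """Convert an LBA count to GiB for a given LBA data size in bytes."""
    return num_lbas * lba_data_size / (1024 ** 3)

assert lbas_to_gib(1310720) == 5.0   # e.g. the 1310720-LBA namespaces in this log
assert lbas_to_gib(1048576) == 4.0   # e.g. the 1048576-LBA namespaces on the 12342 controller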
00:07:37.531 Command Effects Log Page: Supported 00:07:37.531 Get Log Page Extended Data: Supported 00:07:37.531 Telemetry Log Pages: Not Supported 00:07:37.531 Persistent Event Log Pages: Not Supported 00:07:37.531 Supported Log Pages Log Page: May Support 00:07:37.531 Commands Supported & Effects Log Page: Not Supported 00:07:37.531 Feature Identifiers & Effects Log Page:May Support 00:07:37.531 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.531 Data Area 4 for Telemetry Log: Not Supported 00:07:37.531 Error Log Page Entries Supported: 1 00:07:37.531 Keep Alive: Not Supported 00:07:37.531 00:07:37.531 NVM Command Set Attributes 00:07:37.531 ========================== 00:07:37.531 Submission Queue Entry Size 00:07:37.531 Max: 64 00:07:37.531 Min: 64 00:07:37.531 Completion Queue Entry Size 00:07:37.531 Max: 16 00:07:37.531 Min: 16 00:07:37.531 Number of Namespaces: 256 00:07:37.531 Compare Command: Supported 00:07:37.531 Write Uncorrectable Command: Not Supported 00:07:37.531 Dataset Management Command: Supported 00:07:37.531 Write Zeroes Command: Supported 00:07:37.531 Set Features Save Field: Supported 00:07:37.531 Reservations: Not Supported 00:07:37.531 Timestamp: Supported 00:07:37.531 Copy: Supported 00:07:37.531 Volatile Write Cache: Present 00:07:37.531 Atomic Write Unit (Normal): 1 00:07:37.531 Atomic Write Unit (PFail): 1 00:07:37.531 Atomic Compare & Write Unit: 1 00:07:37.531 Fused Compare & Write: Not Supported 00:07:37.531 Scatter-Gather List 00:07:37.531 SGL Command Set: Supported 00:07:37.531 SGL Keyed: Not Supported 00:07:37.531 SGL Bit Bucket Descriptor: Not Supported 00:07:37.531 SGL Metadata Pointer: Not Supported 00:07:37.531 Oversized SGL: Not Supported 00:07:37.531 SGL Metadata Address: Not Supported 00:07:37.531 SGL Offset: Not Supported 00:07:37.531 Transport SGL Data Block: Not Supported 00:07:37.531 Replay Protected Memory Block: Not Supported 00:07:37.531 00:07:37.531 Firmware Slot Information 00:07:37.531 ========================= 00:07:37.531 Active slot: 1 00:07:37.531 Slot 1 Firmware Revision: 1.0 00:07:37.531 00:07:37.531 00:07:37.531 Commands Supported and Effects 00:07:37.531 ============================== 00:07:37.531 Admin Commands 00:07:37.531 -------------- 00:07:37.531 Delete I/O Submission Queue (00h): Supported 00:07:37.531 Create I/O Submission Queue (01h): Supported 00:07:37.531 Get Log Page (02h): Supported 00:07:37.531 Delete I/O Completion Queue (04h): Supported 00:07:37.531 Create I/O Completion Queue (05h): Supported 00:07:37.531 Identify (06h): Supported 00:07:37.531 Abort (08h): Supported 00:07:37.531 Set Features (09h): Supported 00:07:37.531 Get Features (0Ah): Supported 00:07:37.531 Asynchronous Event Request (0Ch): Supported 00:07:37.531 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.531 Directive Send (19h): Supported 00:07:37.531 Directive Receive (1Ah): Supported 00:07:37.531 Virtualization Management (1Ch): Supported 00:07:37.531 Doorbell Buffer Config (7Ch): Supported 00:07:37.531 Format NVM (80h): Supported LBA-Change 00:07:37.531 I/O Commands 00:07:37.531 ------------ 00:07:37.531 Flush (00h): Supported LBA-Change 00:07:37.531 Write (01h): Supported LBA-Change 00:07:37.531 Read (02h): Supported 00:07:37.531 Compare (05h): Supported 00:07:37.531 Write Zeroes (08h): Supported LBA-Change 00:07:37.531 Dataset Management (09h): Supported LBA-Change 00:07:37.531 Unknown (0Ch): Supported 00:07:37.531 Unknown (12h): Supported 00:07:37.531 Copy (19h): Supported LBA-Change 00:07:37.531 Unknown (1Dh): 
Supported LBA-Change 00:07:37.531 00:07:37.531 Error Log 00:07:37.531 ========= 00:07:37.531 00:07:37.531 Arbitration 00:07:37.531 =========== 00:07:37.531 Arbitration Burst: no limit 00:07:37.531 00:07:37.531 Power Management 00:07:37.531 ================ 00:07:37.531 Number of Power States: 1 00:07:37.531 Current Power State: Power State #0 00:07:37.531 Power State #0: 00:07:37.531 Max Power: 25.00 W 00:07:37.531 Non-Operational State: Operational 00:07:37.531 Entry Latency: 16 microseconds 00:07:37.531 Exit Latency: 4 microseconds 00:07:37.531 Relative Read Throughput: 0 00:07:37.531 Relative Read Latency: 0 00:07:37.531 Relative Write Throughput: 0 00:07:37.531 Relative Write Latency: 0 00:07:37.531 Idle Power: Not Reported 00:07:37.531 Active Power: Not Reported 00:07:37.531 Non-Operational Permissive Mode: Not Supported 00:07:37.531 00:07:37.531 Health Information 00:07:37.531 ================== 00:07:37.531 Critical Warnings: 00:07:37.531 Available Spare Space: OK 00:07:37.531 Temperature: OK 00:07:37.531 Device Reliability: OK 00:07:37.531 Read Only: No 00:07:37.531 Volatile Memory Backup: OK 00:07:37.531 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.531 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.531 Available Spare: 0% 00:07:37.531 Available Spare Threshold: 0% 00:07:37.531 Life Percentage Used: 0% 00:07:37.531 Data Units Read: 2219 00:07:37.531 Data Units Written: 2006 00:07:37.531 Host Read Commands: 114774 00:07:37.531 Host Write Commands: 113043 00:07:37.531 Controller Busy Time: 0 minutes 00:07:37.531 Power Cycles: 0 00:07:37.531 Power On Hours: 0 hours 00:07:37.531 Unsafe Shutdowns: 0 00:07:37.531 Unrecoverable Media Errors: 0 00:07:37.531 Lifetime Error Log Entries: 0 00:07:37.531 Warning Temperature Time: 0 minutes 00:07:37.531 Critical Temperature Time: 0 minutes 00:07:37.531 00:07:37.531 Number of Queues 00:07:37.531 ================ 00:07:37.531 Number of I/O Submission Queues: 64 00:07:37.531 Number of I/O Completion Queues: 64 00:07:37.531 00:07:37.531 ZNS Specific Controller Data 00:07:37.532 ============================ 00:07:37.532 Zone Append Size Limit: 0 00:07:37.532 00:07:37.532 00:07:37.532 Active Namespaces 00:07:37.532 ================= 00:07:37.532 Namespace ID:1 00:07:37.532 Error Recovery Timeout: Unlimited 00:07:37.532 Command Set Identifier: NVM (00h) 00:07:37.532 Deallocate: Supported 00:07:37.532 Deallocated/Unwritten Error: Supported 00:07:37.532 Deallocated Read Value: All 0x00 00:07:37.532 Deallocate in Write Zeroes: Not Supported 00:07:37.532 Deallocated Guard Field: 0xFFFF 00:07:37.532 Flush: Supported 00:07:37.532 Reservation: Not Supported 00:07:37.532 Namespace Sharing Capabilities: Private 00:07:37.532 Size (in LBAs): 1048576 (4GiB) 00:07:37.532 Capacity (in LBAs): 1048576 (4GiB) 00:07:37.532 Utilization (in LBAs): 1048576 (4GiB) 00:07:37.532 Thin Provisioning: Not Supported 00:07:37.532 Per-NS Atomic Units: No 00:07:37.532 Maximum Single Source Range Length: 128 00:07:37.532 Maximum Copy Length: 128 00:07:37.532 Maximum Source Range Count: 128 00:07:37.532 NGUID/EUI64 Never Reused: No 00:07:37.532 Namespace Write Protected: No 00:07:37.532 Number of LBA Formats: 8 00:07:37.532 Current LBA Format: LBA Format #04 00:07:37.532 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.532 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.532 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.532 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.532 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:37.532 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.532 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.532 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.532 00:07:37.532 NVM Specific Namespace Data 00:07:37.532 =========================== 00:07:37.532 Logical Block Storage Tag Mask: 0 00:07:37.532 Protection Information Capabilities: 00:07:37.532 16b Guard Protection Information Storage Tag Support: No 00:07:37.532 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.532 Storage Tag Check Read Support: No 00:07:37.532 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Namespace ID:2 00:07:37.532 Error Recovery Timeout: Unlimited 00:07:37.532 Command Set Identifier: NVM (00h) 00:07:37.532 Deallocate: Supported 00:07:37.532 Deallocated/Unwritten Error: Supported 00:07:37.532 Deallocated Read Value: All 0x00 00:07:37.532 Deallocate in Write Zeroes: Not Supported 00:07:37.532 Deallocated Guard Field: 0xFFFF 00:07:37.532 Flush: Supported 00:07:37.532 Reservation: Not Supported 00:07:37.532 Namespace Sharing Capabilities: Private 00:07:37.532 Size (in LBAs): 1048576 (4GiB) 00:07:37.532 Capacity (in LBAs): 1048576 (4GiB) 00:07:37.532 Utilization (in LBAs): 1048576 (4GiB) 00:07:37.532 Thin Provisioning: Not Supported 00:07:37.532 Per-NS Atomic Units: No 00:07:37.532 Maximum Single Source Range Length: 128 00:07:37.532 Maximum Copy Length: 128 00:07:37.532 Maximum Source Range Count: 128 00:07:37.532 NGUID/EUI64 Never Reused: No 00:07:37.532 Namespace Write Protected: No 00:07:37.532 Number of LBA Formats: 8 00:07:37.532 Current LBA Format: LBA Format #04 00:07:37.532 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.532 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.532 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.532 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.532 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.532 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.532 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.532 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.532 00:07:37.532 NVM Specific Namespace Data 00:07:37.532 =========================== 00:07:37.532 Logical Block Storage Tag Mask: 0 00:07:37.532 Protection Information Capabilities: 00:07:37.532 16b Guard Protection Information Storage Tag Support: No 00:07:37.532 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.532 Storage Tag Check Read Support: No 00:07:37.532 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.532 Namespace ID:3 00:07:37.532 Error Recovery Timeout: Unlimited 00:07:37.532 Command Set Identifier: NVM (00h) 00:07:37.532 Deallocate: Supported 00:07:37.532 Deallocated/Unwritten Error: Supported 00:07:37.532 Deallocated Read Value: All 0x00 00:07:37.532 Deallocate in Write Zeroes: Not Supported 00:07:37.532 Deallocated Guard Field: 0xFFFF 00:07:37.532 Flush: Supported 00:07:37.532 Reservation: Not Supported 00:07:37.532 Namespace Sharing Capabilities: Private 00:07:37.532 Size (in LBAs): 1048576 (4GiB) 00:07:37.532 Capacity (in LBAs): 1048576 (4GiB) 00:07:37.532 Utilization (in LBAs): 1048576 (4GiB) 00:07:37.532 Thin Provisioning: Not Supported 00:07:37.532 Per-NS Atomic Units: No 00:07:37.532 Maximum Single Source Range Length: 128 00:07:37.532 Maximum Copy Length: 128 00:07:37.532 Maximum Source Range Count: 128 00:07:37.532 NGUID/EUI64 Never Reused: No 00:07:37.532 Namespace Write Protected: No 00:07:37.532 Number of LBA Formats: 8 00:07:37.532 Current LBA Format: LBA Format #04 00:07:37.532 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.532 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.532 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.532 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.533 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.533 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.533 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.533 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.533 00:07:37.533 NVM Specific Namespace Data 00:07:37.533 =========================== 00:07:37.533 Logical Block Storage Tag Mask: 0 00:07:37.533 Protection Information Capabilities: 00:07:37.533 16b Guard Protection Information Storage Tag Support: No 00:07:37.533 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.533 Storage Tag Check Read Support: No 00:07:37.533 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.533 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.533 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.533 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.533 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.533 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.533 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.533 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.533 03:58:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:37.533 03:58:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:37.827 ===================================================== 00:07:37.827 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:37.827 ===================================================== 00:07:37.827 Controller Capabilities/Features 00:07:37.827 ================================ 00:07:37.827 Vendor ID: 1b36 00:07:37.827 Subsystem Vendor ID: 1af4 00:07:37.827 Serial Number: 12340 00:07:37.827 Model Number: QEMU NVMe Ctrl 00:07:37.827 Firmware Version: 8.0.0 00:07:37.827 Recommended Arb Burst: 6 00:07:37.827 IEEE OUI Identifier: 00 54 52 00:07:37.827 Multi-path I/O 00:07:37.827 May have multiple subsystem ports: No 00:07:37.827 May have multiple controllers: No 00:07:37.827 Associated with SR-IOV VF: No 00:07:37.827 Max Data Transfer Size: 524288 00:07:37.827 Max Number of Namespaces: 256 00:07:37.827 Max Number of I/O Queues: 64 00:07:37.827 NVMe Specification Version (VS): 1.4 00:07:37.827 NVMe Specification Version (Identify): 1.4 00:07:37.827 Maximum Queue Entries: 2048 00:07:37.827 Contiguous Queues Required: Yes 00:07:37.827 Arbitration Mechanisms Supported 00:07:37.827 Weighted Round Robin: Not Supported 00:07:37.827 Vendor Specific: Not Supported 00:07:37.827 Reset Timeout: 7500 ms 00:07:37.827 Doorbell Stride: 4 bytes 00:07:37.827 NVM Subsystem Reset: Not Supported 00:07:37.827 Command Sets Supported 00:07:37.827 NVM Command Set: Supported 00:07:37.827 Boot Partition: Not Supported 00:07:37.827 Memory Page Size Minimum: 4096 bytes 00:07:37.827 Memory Page Size Maximum: 65536 bytes 00:07:37.827 Persistent Memory Region: Not Supported 00:07:37.827 Optional Asynchronous Events Supported 00:07:37.827 Namespace Attribute Notices: Supported 00:07:37.827 Firmware Activation Notices: Not Supported 00:07:37.827 ANA Change Notices: Not Supported 00:07:37.827 PLE Aggregate Log Change Notices: Not Supported 00:07:37.827 LBA Status Info Alert Notices: Not Supported 00:07:37.827 EGE Aggregate Log Change Notices: Not Supported 00:07:37.827 Normal NVM Subsystem Shutdown event: Not Supported 00:07:37.827 Zone Descriptor Change Notices: Not Supported 00:07:37.827 Discovery Log Change Notices: Not Supported 00:07:37.827 Controller Attributes 00:07:37.827 128-bit Host Identifier: Not Supported 00:07:37.827 Non-Operational Permissive Mode: Not Supported 00:07:37.827 NVM Sets: Not Supported 00:07:37.827 Read Recovery Levels: Not Supported 00:07:37.827 Endurance Groups: Not Supported 00:07:37.827 Predictable Latency Mode: Not Supported 00:07:37.827 Traffic Based Keep ALive: Not Supported 00:07:37.827 Namespace Granularity: Not Supported 00:07:37.827 SQ Associations: Not Supported 00:07:37.827 UUID List: Not Supported 00:07:37.827 Multi-Domain Subsystem: Not Supported 00:07:37.827 Fixed Capacity Management: Not Supported 00:07:37.827 Variable Capacity Management: Not Supported 00:07:37.827 Delete Endurance Group: Not Supported 00:07:37.827 Delete NVM Set: Not Supported 00:07:37.827 Extended LBA Formats Supported: Supported 00:07:37.827 Flexible Data Placement Supported: Not Supported 00:07:37.827 00:07:37.827 Controller Memory Buffer Support 00:07:37.827 ================================ 00:07:37.827 Supported: No 00:07:37.827 00:07:37.827 Persistent Memory Region Support 00:07:37.827 ================================ 00:07:37.827 Supported: No 00:07:37.827 00:07:37.827 Admin Command Set Attributes 00:07:37.827 ============================ 00:07:37.827 Security Send/Receive: Not Supported 00:07:37.827 
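The Health Information sections in these dumps give temperatures in Kelvin with a Celsius figure in parentheses. The sketch below reproduces that conversion; kelvin_to_celsius is a hypothetical helper, and the integer 273 offset (rather than 273.15) is an assumption that happens to match the printed pairs.

# Illustrative only. The printed pairs (323 Kelvin / 50 Celsius,
# 343 Kelvin / 70 Celsius) correspond to subtracting 273 from the Kelvin
# reading; the exact rounding used by the tool is an assumption here.
def kelvin_to_celsius(kelvin: int) -> int:
    return kelvin - 273

assert kelvin_to_celsius(323) == 50   # Current Temperature in these dumps
assert kelvin_to_celsius(343) == 70   # Temperature Threshold in these dumps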
Format NVM: Supported 00:07:37.827 Firmware Activate/Download: Not Supported 00:07:37.827 Namespace Management: Supported 00:07:37.827 Device Self-Test: Not Supported 00:07:37.827 Directives: Supported 00:07:37.827 NVMe-MI: Not Supported 00:07:37.827 Virtualization Management: Not Supported 00:07:37.827 Doorbell Buffer Config: Supported 00:07:37.827 Get LBA Status Capability: Not Supported 00:07:37.827 Command & Feature Lockdown Capability: Not Supported 00:07:37.827 Abort Command Limit: 4 00:07:37.827 Async Event Request Limit: 4 00:07:37.827 Number of Firmware Slots: N/A 00:07:37.827 Firmware Slot 1 Read-Only: N/A 00:07:37.827 Firmware Activation Without Reset: N/A 00:07:37.827 Multiple Update Detection Support: N/A 00:07:37.827 Firmware Update Granularity: No Information Provided 00:07:37.827 Per-Namespace SMART Log: Yes 00:07:37.827 Asymmetric Namespace Access Log Page: Not Supported 00:07:37.827 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:37.827 Command Effects Log Page: Supported 00:07:37.827 Get Log Page Extended Data: Supported 00:07:37.827 Telemetry Log Pages: Not Supported 00:07:37.827 Persistent Event Log Pages: Not Supported 00:07:37.827 Supported Log Pages Log Page: May Support 00:07:37.827 Commands Supported & Effects Log Page: Not Supported 00:07:37.827 Feature Identifiers & Effects Log Page:May Support 00:07:37.827 NVMe-MI Commands & Effects Log Page: May Support 00:07:37.827 Data Area 4 for Telemetry Log: Not Supported 00:07:37.827 Error Log Page Entries Supported: 1 00:07:37.827 Keep Alive: Not Supported 00:07:37.827 00:07:37.827 NVM Command Set Attributes 00:07:37.827 ========================== 00:07:37.828 Submission Queue Entry Size 00:07:37.828 Max: 64 00:07:37.828 Min: 64 00:07:37.828 Completion Queue Entry Size 00:07:37.828 Max: 16 00:07:37.828 Min: 16 00:07:37.828 Number of Namespaces: 256 00:07:37.828 Compare Command: Supported 00:07:37.828 Write Uncorrectable Command: Not Supported 00:07:37.828 Dataset Management Command: Supported 00:07:37.828 Write Zeroes Command: Supported 00:07:37.828 Set Features Save Field: Supported 00:07:37.828 Reservations: Not Supported 00:07:37.828 Timestamp: Supported 00:07:37.828 Copy: Supported 00:07:37.828 Volatile Write Cache: Present 00:07:37.828 Atomic Write Unit (Normal): 1 00:07:37.828 Atomic Write Unit (PFail): 1 00:07:37.828 Atomic Compare & Write Unit: 1 00:07:37.828 Fused Compare & Write: Not Supported 00:07:37.828 Scatter-Gather List 00:07:37.828 SGL Command Set: Supported 00:07:37.828 SGL Keyed: Not Supported 00:07:37.828 SGL Bit Bucket Descriptor: Not Supported 00:07:37.828 SGL Metadata Pointer: Not Supported 00:07:37.828 Oversized SGL: Not Supported 00:07:37.828 SGL Metadata Address: Not Supported 00:07:37.828 SGL Offset: Not Supported 00:07:37.828 Transport SGL Data Block: Not Supported 00:07:37.828 Replay Protected Memory Block: Not Supported 00:07:37.828 00:07:37.828 Firmware Slot Information 00:07:37.828 ========================= 00:07:37.828 Active slot: 1 00:07:37.828 Slot 1 Firmware Revision: 1.0 00:07:37.828 00:07:37.828 00:07:37.828 Commands Supported and Effects 00:07:37.828 ============================== 00:07:37.828 Admin Commands 00:07:37.828 -------------- 00:07:37.828 Delete I/O Submission Queue (00h): Supported 00:07:37.828 Create I/O Submission Queue (01h): Supported 00:07:37.828 Get Log Page (02h): Supported 00:07:37.828 Delete I/O Completion Queue (04h): Supported 00:07:37.828 Create I/O Completion Queue (05h): Supported 00:07:37.828 Identify (06h): Supported 00:07:37.828 Abort (08h): Supported 
00:07:37.828 Set Features (09h): Supported 00:07:37.828 Get Features (0Ah): Supported 00:07:37.828 Asynchronous Event Request (0Ch): Supported 00:07:37.828 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:37.828 Directive Send (19h): Supported 00:07:37.828 Directive Receive (1Ah): Supported 00:07:37.828 Virtualization Management (1Ch): Supported 00:07:37.828 Doorbell Buffer Config (7Ch): Supported 00:07:37.828 Format NVM (80h): Supported LBA-Change 00:07:37.828 I/O Commands 00:07:37.828 ------------ 00:07:37.828 Flush (00h): Supported LBA-Change 00:07:37.828 Write (01h): Supported LBA-Change 00:07:37.828 Read (02h): Supported 00:07:37.828 Compare (05h): Supported 00:07:37.828 Write Zeroes (08h): Supported LBA-Change 00:07:37.828 Dataset Management (09h): Supported LBA-Change 00:07:37.828 Unknown (0Ch): Supported 00:07:37.828 Unknown (12h): Supported 00:07:37.828 Copy (19h): Supported LBA-Change 00:07:37.828 Unknown (1Dh): Supported LBA-Change 00:07:37.828 00:07:37.828 Error Log 00:07:37.828 ========= 00:07:37.828 00:07:37.828 Arbitration 00:07:37.828 =========== 00:07:37.828 Arbitration Burst: no limit 00:07:37.828 00:07:37.828 Power Management 00:07:37.828 ================ 00:07:37.828 Number of Power States: 1 00:07:37.828 Current Power State: Power State #0 00:07:37.828 Power State #0: 00:07:37.828 Max Power: 25.00 W 00:07:37.828 Non-Operational State: Operational 00:07:37.828 Entry Latency: 16 microseconds 00:07:37.828 Exit Latency: 4 microseconds 00:07:37.828 Relative Read Throughput: 0 00:07:37.828 Relative Read Latency: 0 00:07:37.828 Relative Write Throughput: 0 00:07:37.828 Relative Write Latency: 0 00:07:37.828 Idle Power: Not Reported 00:07:37.828 Active Power: Not Reported 00:07:37.828 Non-Operational Permissive Mode: Not Supported 00:07:37.828 00:07:37.828 Health Information 00:07:37.828 ================== 00:07:37.828 Critical Warnings: 00:07:37.828 Available Spare Space: OK 00:07:37.828 Temperature: OK 00:07:37.828 Device Reliability: OK 00:07:37.828 Read Only: No 00:07:37.828 Volatile Memory Backup: OK 00:07:37.828 Current Temperature: 323 Kelvin (50 Celsius) 00:07:37.828 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:37.828 Available Spare: 0% 00:07:37.828 Available Spare Threshold: 0% 00:07:37.828 Life Percentage Used: 0% 00:07:37.828 Data Units Read: 694 00:07:37.828 Data Units Written: 622 00:07:37.828 Host Read Commands: 37477 00:07:37.828 Host Write Commands: 37263 00:07:37.828 Controller Busy Time: 0 minutes 00:07:37.828 Power Cycles: 0 00:07:37.828 Power On Hours: 0 hours 00:07:37.828 Unsafe Shutdowns: 0 00:07:37.828 Unrecoverable Media Errors: 0 00:07:37.828 Lifetime Error Log Entries: 0 00:07:37.828 Warning Temperature Time: 0 minutes 00:07:37.828 Critical Temperature Time: 0 minutes 00:07:37.828 00:07:37.828 Number of Queues 00:07:37.828 ================ 00:07:37.828 Number of I/O Submission Queues: 64 00:07:37.828 Number of I/O Completion Queues: 64 00:07:37.828 00:07:37.828 ZNS Specific Controller Data 00:07:37.828 ============================ 00:07:37.828 Zone Append Size Limit: 0 00:07:37.828 00:07:37.828 00:07:37.828 Active Namespaces 00:07:37.828 ================= 00:07:37.828 Namespace ID:1 00:07:37.828 Error Recovery Timeout: Unlimited 00:07:37.828 Command Set Identifier: NVM (00h) 00:07:37.828 Deallocate: Supported 00:07:37.828 Deallocated/Unwritten Error: Supported 00:07:37.828 Deallocated Read Value: All 0x00 00:07:37.828 Deallocate in Write Zeroes: Not Supported 00:07:37.828 Deallocated Guard Field: 0xFFFF 00:07:37.828 Flush: 
Supported 00:07:37.828 Reservation: Not Supported 00:07:37.828 Metadata Transferred as: Separate Metadata Buffer 00:07:37.828 Namespace Sharing Capabilities: Private 00:07:37.828 Size (in LBAs): 1548666 (5GiB) 00:07:37.828 Capacity (in LBAs): 1548666 (5GiB) 00:07:37.828 Utilization (in LBAs): 1548666 (5GiB) 00:07:37.828 Thin Provisioning: Not Supported 00:07:37.828 Per-NS Atomic Units: No 00:07:37.828 Maximum Single Source Range Length: 128 00:07:37.828 Maximum Copy Length: 128 00:07:37.828 Maximum Source Range Count: 128 00:07:37.828 NGUID/EUI64 Never Reused: No 00:07:37.828 Namespace Write Protected: No 00:07:37.828 Number of LBA Formats: 8 00:07:37.828 Current LBA Format: LBA Format #07 00:07:37.828 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:37.828 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:37.828 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:37.828 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:37.828 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:37.828 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:37.828 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:37.828 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:37.828 00:07:37.828 NVM Specific Namespace Data 00:07:37.828 =========================== 00:07:37.828 Logical Block Storage Tag Mask: 0 00:07:37.828 Protection Information Capabilities: 00:07:37.828 16b Guard Protection Information Storage Tag Support: No 00:07:37.828 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:37.828 Storage Tag Check Read Support: No 00:07:37.828 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.828 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.828 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.828 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.828 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.828 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.828 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.828 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:37.828 03:58:30 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:37.828 03:58:30 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:38.087 ===================================================== 00:07:38.087 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:38.087 ===================================================== 00:07:38.087 Controller Capabilities/Features 00:07:38.087 ================================ 00:07:38.087 Vendor ID: 1b36 00:07:38.087 Subsystem Vendor ID: 1af4 00:07:38.087 Serial Number: 12341 00:07:38.087 Model Number: QEMU NVMe Ctrl 00:07:38.087 Firmware Version: 8.0.0 00:07:38.087 Recommended Arb Burst: 6 00:07:38.087 IEEE OUI Identifier: 00 54 52 00:07:38.087 Multi-path I/O 00:07:38.087 May have multiple subsystem ports: No 00:07:38.087 May have multiple controllers: No 00:07:38.087 Associated with SR-IOV VF: No 00:07:38.087 Max Data Transfer Size: 524288 00:07:38.087 Max Number of Namespaces: 256 00:07:38.087 Max Number of I/O Queues: 64 00:07:38.087 NVMe 
Specification Version (VS): 1.4 00:07:38.087 NVMe Specification Version (Identify): 1.4 00:07:38.087 Maximum Queue Entries: 2048 00:07:38.087 Contiguous Queues Required: Yes 00:07:38.087 Arbitration Mechanisms Supported 00:07:38.087 Weighted Round Robin: Not Supported 00:07:38.087 Vendor Specific: Not Supported 00:07:38.087 Reset Timeout: 7500 ms 00:07:38.087 Doorbell Stride: 4 bytes 00:07:38.087 NVM Subsystem Reset: Not Supported 00:07:38.087 Command Sets Supported 00:07:38.087 NVM Command Set: Supported 00:07:38.087 Boot Partition: Not Supported 00:07:38.087 Memory Page Size Minimum: 4096 bytes 00:07:38.087 Memory Page Size Maximum: 65536 bytes 00:07:38.087 Persistent Memory Region: Not Supported 00:07:38.087 Optional Asynchronous Events Supported 00:07:38.087 Namespace Attribute Notices: Supported 00:07:38.087 Firmware Activation Notices: Not Supported 00:07:38.087 ANA Change Notices: Not Supported 00:07:38.087 PLE Aggregate Log Change Notices: Not Supported 00:07:38.087 LBA Status Info Alert Notices: Not Supported 00:07:38.087 EGE Aggregate Log Change Notices: Not Supported 00:07:38.087 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.087 Zone Descriptor Change Notices: Not Supported 00:07:38.087 Discovery Log Change Notices: Not Supported 00:07:38.087 Controller Attributes 00:07:38.087 128-bit Host Identifier: Not Supported 00:07:38.087 Non-Operational Permissive Mode: Not Supported 00:07:38.087 NVM Sets: Not Supported 00:07:38.087 Read Recovery Levels: Not Supported 00:07:38.087 Endurance Groups: Not Supported 00:07:38.087 Predictable Latency Mode: Not Supported 00:07:38.087 Traffic Based Keep ALive: Not Supported 00:07:38.087 Namespace Granularity: Not Supported 00:07:38.087 SQ Associations: Not Supported 00:07:38.087 UUID List: Not Supported 00:07:38.087 Multi-Domain Subsystem: Not Supported 00:07:38.087 Fixed Capacity Management: Not Supported 00:07:38.087 Variable Capacity Management: Not Supported 00:07:38.087 Delete Endurance Group: Not Supported 00:07:38.087 Delete NVM Set: Not Supported 00:07:38.087 Extended LBA Formats Supported: Supported 00:07:38.087 Flexible Data Placement Supported: Not Supported 00:07:38.087 00:07:38.087 Controller Memory Buffer Support 00:07:38.087 ================================ 00:07:38.087 Supported: No 00:07:38.087 00:07:38.087 Persistent Memory Region Support 00:07:38.087 ================================ 00:07:38.087 Supported: No 00:07:38.087 00:07:38.087 Admin Command Set Attributes 00:07:38.087 ============================ 00:07:38.087 Security Send/Receive: Not Supported 00:07:38.087 Format NVM: Supported 00:07:38.087 Firmware Activate/Download: Not Supported 00:07:38.087 Namespace Management: Supported 00:07:38.087 Device Self-Test: Not Supported 00:07:38.087 Directives: Supported 00:07:38.087 NVMe-MI: Not Supported 00:07:38.087 Virtualization Management: Not Supported 00:07:38.087 Doorbell Buffer Config: Supported 00:07:38.087 Get LBA Status Capability: Not Supported 00:07:38.087 Command & Feature Lockdown Capability: Not Supported 00:07:38.087 Abort Command Limit: 4 00:07:38.087 Async Event Request Limit: 4 00:07:38.087 Number of Firmware Slots: N/A 00:07:38.087 Firmware Slot 1 Read-Only: N/A 00:07:38.087 Firmware Activation Without Reset: N/A 00:07:38.087 Multiple Update Detection Support: N/A 00:07:38.087 Firmware Update Granularity: No Information Provided 00:07:38.087 Per-Namespace SMART Log: Yes 00:07:38.087 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.087 Subsystem NQN: nqn.2019-08.org.qemu:12341 
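Each controller dump in this section is produced by the nvme.sh loop visible above, which runs spdk_nvme_identify once per PCIe BDF with a 'trtype:PCIe traddr:...' transport ID and -i 0. Below is a rough Python rendering of that loop for orientation only; the bdfs list and the output handling are assumptions, since the real test drives this from bash.

# Rough, illustrative rendering of the nvme.sh loop above; the bdfs list and
# the output handling are assumptions, not taken from the SPDK test scripts.
import subprocess

SPDK_IDENTIFY = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify"
bdfs = ["0000:00:10.0", "0000:00:11.0", "0000:00:12.0", "0000:00:13.0"]

for bdf in bdfs:
    # -r passes the PCIe transport ID and -i 0 matches the flags recorded in this log.
    result = subprocess.run(
        [SPDK_IDENTIFY, "-r", f"trtype:PCIe traddr:{bdf}", "-i", "0"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)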
00:07:38.087 Command Effects Log Page: Supported 00:07:38.087 Get Log Page Extended Data: Supported 00:07:38.087 Telemetry Log Pages: Not Supported 00:07:38.087 Persistent Event Log Pages: Not Supported 00:07:38.087 Supported Log Pages Log Page: May Support 00:07:38.087 Commands Supported & Effects Log Page: Not Supported 00:07:38.087 Feature Identifiers & Effects Log Page:May Support 00:07:38.087 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.087 Data Area 4 for Telemetry Log: Not Supported 00:07:38.087 Error Log Page Entries Supported: 1 00:07:38.088 Keep Alive: Not Supported 00:07:38.088 00:07:38.088 NVM Command Set Attributes 00:07:38.088 ========================== 00:07:38.088 Submission Queue Entry Size 00:07:38.088 Max: 64 00:07:38.088 Min: 64 00:07:38.088 Completion Queue Entry Size 00:07:38.088 Max: 16 00:07:38.088 Min: 16 00:07:38.088 Number of Namespaces: 256 00:07:38.088 Compare Command: Supported 00:07:38.088 Write Uncorrectable Command: Not Supported 00:07:38.088 Dataset Management Command: Supported 00:07:38.088 Write Zeroes Command: Supported 00:07:38.088 Set Features Save Field: Supported 00:07:38.088 Reservations: Not Supported 00:07:38.088 Timestamp: Supported 00:07:38.088 Copy: Supported 00:07:38.088 Volatile Write Cache: Present 00:07:38.088 Atomic Write Unit (Normal): 1 00:07:38.088 Atomic Write Unit (PFail): 1 00:07:38.088 Atomic Compare & Write Unit: 1 00:07:38.088 Fused Compare & Write: Not Supported 00:07:38.088 Scatter-Gather List 00:07:38.088 SGL Command Set: Supported 00:07:38.088 SGL Keyed: Not Supported 00:07:38.088 SGL Bit Bucket Descriptor: Not Supported 00:07:38.088 SGL Metadata Pointer: Not Supported 00:07:38.088 Oversized SGL: Not Supported 00:07:38.088 SGL Metadata Address: Not Supported 00:07:38.088 SGL Offset: Not Supported 00:07:38.088 Transport SGL Data Block: Not Supported 00:07:38.088 Replay Protected Memory Block: Not Supported 00:07:38.088 00:07:38.088 Firmware Slot Information 00:07:38.088 ========================= 00:07:38.088 Active slot: 1 00:07:38.088 Slot 1 Firmware Revision: 1.0 00:07:38.088 00:07:38.088 00:07:38.088 Commands Supported and Effects 00:07:38.088 ============================== 00:07:38.088 Admin Commands 00:07:38.088 -------------- 00:07:38.088 Delete I/O Submission Queue (00h): Supported 00:07:38.088 Create I/O Submission Queue (01h): Supported 00:07:38.088 Get Log Page (02h): Supported 00:07:38.088 Delete I/O Completion Queue (04h): Supported 00:07:38.088 Create I/O Completion Queue (05h): Supported 00:07:38.088 Identify (06h): Supported 00:07:38.088 Abort (08h): Supported 00:07:38.088 Set Features (09h): Supported 00:07:38.088 Get Features (0Ah): Supported 00:07:38.088 Asynchronous Event Request (0Ch): Supported 00:07:38.088 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.088 Directive Send (19h): Supported 00:07:38.088 Directive Receive (1Ah): Supported 00:07:38.088 Virtualization Management (1Ch): Supported 00:07:38.088 Doorbell Buffer Config (7Ch): Supported 00:07:38.088 Format NVM (80h): Supported LBA-Change 00:07:38.088 I/O Commands 00:07:38.088 ------------ 00:07:38.088 Flush (00h): Supported LBA-Change 00:07:38.088 Write (01h): Supported LBA-Change 00:07:38.088 Read (02h): Supported 00:07:38.088 Compare (05h): Supported 00:07:38.088 Write Zeroes (08h): Supported LBA-Change 00:07:38.088 Dataset Management (09h): Supported LBA-Change 00:07:38.088 Unknown (0Ch): Supported 00:07:38.088 Unknown (12h): Supported 00:07:38.088 Copy (19h): Supported LBA-Change 00:07:38.088 Unknown (1Dh): 
Supported LBA-Change 00:07:38.088 00:07:38.088 Error Log 00:07:38.088 ========= 00:07:38.088 00:07:38.088 Arbitration 00:07:38.088 =========== 00:07:38.088 Arbitration Burst: no limit 00:07:38.088 00:07:38.088 Power Management 00:07:38.088 ================ 00:07:38.088 Number of Power States: 1 00:07:38.088 Current Power State: Power State #0 00:07:38.088 Power State #0: 00:07:38.088 Max Power: 25.00 W 00:07:38.088 Non-Operational State: Operational 00:07:38.088 Entry Latency: 16 microseconds 00:07:38.088 Exit Latency: 4 microseconds 00:07:38.088 Relative Read Throughput: 0 00:07:38.088 Relative Read Latency: 0 00:07:38.088 Relative Write Throughput: 0 00:07:38.088 Relative Write Latency: 0 00:07:38.088 Idle Power: Not Reported 00:07:38.088 Active Power: Not Reported 00:07:38.088 Non-Operational Permissive Mode: Not Supported 00:07:38.088 00:07:38.088 Health Information 00:07:38.088 ================== 00:07:38.088 Critical Warnings: 00:07:38.088 Available Spare Space: OK 00:07:38.088 Temperature: OK 00:07:38.088 Device Reliability: OK 00:07:38.088 Read Only: No 00:07:38.088 Volatile Memory Backup: OK 00:07:38.088 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.088 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.088 Available Spare: 0% 00:07:38.088 Available Spare Threshold: 0% 00:07:38.088 Life Percentage Used: 0% 00:07:38.088 Data Units Read: 1086 00:07:38.088 Data Units Written: 952 00:07:38.088 Host Read Commands: 57024 00:07:38.088 Host Write Commands: 55811 00:07:38.088 Controller Busy Time: 0 minutes 00:07:38.088 Power Cycles: 0 00:07:38.088 Power On Hours: 0 hours 00:07:38.088 Unsafe Shutdowns: 0 00:07:38.088 Unrecoverable Media Errors: 0 00:07:38.088 Lifetime Error Log Entries: 0 00:07:38.088 Warning Temperature Time: 0 minutes 00:07:38.088 Critical Temperature Time: 0 minutes 00:07:38.088 00:07:38.088 Number of Queues 00:07:38.088 ================ 00:07:38.088 Number of I/O Submission Queues: 64 00:07:38.088 Number of I/O Completion Queues: 64 00:07:38.088 00:07:38.088 ZNS Specific Controller Data 00:07:38.088 ============================ 00:07:38.088 Zone Append Size Limit: 0 00:07:38.088 00:07:38.088 00:07:38.088 Active Namespaces 00:07:38.088 ================= 00:07:38.088 Namespace ID:1 00:07:38.088 Error Recovery Timeout: Unlimited 00:07:38.088 Command Set Identifier: NVM (00h) 00:07:38.088 Deallocate: Supported 00:07:38.088 Deallocated/Unwritten Error: Supported 00:07:38.088 Deallocated Read Value: All 0x00 00:07:38.088 Deallocate in Write Zeroes: Not Supported 00:07:38.088 Deallocated Guard Field: 0xFFFF 00:07:38.088 Flush: Supported 00:07:38.088 Reservation: Not Supported 00:07:38.088 Namespace Sharing Capabilities: Private 00:07:38.088 Size (in LBAs): 1310720 (5GiB) 00:07:38.088 Capacity (in LBAs): 1310720 (5GiB) 00:07:38.088 Utilization (in LBAs): 1310720 (5GiB) 00:07:38.088 Thin Provisioning: Not Supported 00:07:38.088 Per-NS Atomic Units: No 00:07:38.088 Maximum Single Source Range Length: 128 00:07:38.088 Maximum Copy Length: 128 00:07:38.088 Maximum Source Range Count: 128 00:07:38.088 NGUID/EUI64 Never Reused: No 00:07:38.088 Namespace Write Protected: No 00:07:38.088 Number of LBA Formats: 8 00:07:38.088 Current LBA Format: LBA Format #04 00:07:38.088 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.088 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.088 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.088 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.088 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:38.088 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.088 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.088 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.088 00:07:38.088 NVM Specific Namespace Data 00:07:38.088 =========================== 00:07:38.088 Logical Block Storage Tag Mask: 0 00:07:38.088 Protection Information Capabilities: 00:07:38.088 16b Guard Protection Information Storage Tag Support: No 00:07:38.088 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.088 Storage Tag Check Read Support: No 00:07:38.088 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.088 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.088 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.088 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.088 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.088 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.088 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.088 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.088 03:58:31 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:38.088 03:58:31 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:38.348 ===================================================== 00:07:38.348 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:38.348 ===================================================== 00:07:38.348 Controller Capabilities/Features 00:07:38.348 ================================ 00:07:38.348 Vendor ID: 1b36 00:07:38.348 Subsystem Vendor ID: 1af4 00:07:38.348 Serial Number: 12342 00:07:38.348 Model Number: QEMU NVMe Ctrl 00:07:38.348 Firmware Version: 8.0.0 00:07:38.348 Recommended Arb Burst: 6 00:07:38.348 IEEE OUI Identifier: 00 54 52 00:07:38.348 Multi-path I/O 00:07:38.348 May have multiple subsystem ports: No 00:07:38.348 May have multiple controllers: No 00:07:38.348 Associated with SR-IOV VF: No 00:07:38.348 Max Data Transfer Size: 524288 00:07:38.348 Max Number of Namespaces: 256 00:07:38.348 Max Number of I/O Queues: 64 00:07:38.348 NVMe Specification Version (VS): 1.4 00:07:38.348 NVMe Specification Version (Identify): 1.4 00:07:38.348 Maximum Queue Entries: 2048 00:07:38.348 Contiguous Queues Required: Yes 00:07:38.348 Arbitration Mechanisms Supported 00:07:38.348 Weighted Round Robin: Not Supported 00:07:38.348 Vendor Specific: Not Supported 00:07:38.348 Reset Timeout: 7500 ms 00:07:38.348 Doorbell Stride: 4 bytes 00:07:38.348 NVM Subsystem Reset: Not Supported 00:07:38.348 Command Sets Supported 00:07:38.348 NVM Command Set: Supported 00:07:38.348 Boot Partition: Not Supported 00:07:38.349 Memory Page Size Minimum: 4096 bytes 00:07:38.349 Memory Page Size Maximum: 65536 bytes 00:07:38.349 Persistent Memory Region: Not Supported 00:07:38.349 Optional Asynchronous Events Supported 00:07:38.349 Namespace Attribute Notices: Supported 00:07:38.349 Firmware Activation Notices: Not Supported 00:07:38.349 ANA Change Notices: Not Supported 00:07:38.349 PLE Aggregate Log Change Notices: Not Supported 00:07:38.349 LBA Status Info Alert Notices: 
Not Supported 00:07:38.349 EGE Aggregate Log Change Notices: Not Supported 00:07:38.349 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.349 Zone Descriptor Change Notices: Not Supported 00:07:38.349 Discovery Log Change Notices: Not Supported 00:07:38.349 Controller Attributes 00:07:38.349 128-bit Host Identifier: Not Supported 00:07:38.349 Non-Operational Permissive Mode: Not Supported 00:07:38.349 NVM Sets: Not Supported 00:07:38.349 Read Recovery Levels: Not Supported 00:07:38.349 Endurance Groups: Not Supported 00:07:38.349 Predictable Latency Mode: Not Supported 00:07:38.349 Traffic Based Keep ALive: Not Supported 00:07:38.349 Namespace Granularity: Not Supported 00:07:38.349 SQ Associations: Not Supported 00:07:38.349 UUID List: Not Supported 00:07:38.349 Multi-Domain Subsystem: Not Supported 00:07:38.349 Fixed Capacity Management: Not Supported 00:07:38.349 Variable Capacity Management: Not Supported 00:07:38.349 Delete Endurance Group: Not Supported 00:07:38.349 Delete NVM Set: Not Supported 00:07:38.349 Extended LBA Formats Supported: Supported 00:07:38.349 Flexible Data Placement Supported: Not Supported 00:07:38.349 00:07:38.349 Controller Memory Buffer Support 00:07:38.349 ================================ 00:07:38.349 Supported: No 00:07:38.349 00:07:38.349 Persistent Memory Region Support 00:07:38.349 ================================ 00:07:38.349 Supported: No 00:07:38.349 00:07:38.349 Admin Command Set Attributes 00:07:38.349 ============================ 00:07:38.349 Security Send/Receive: Not Supported 00:07:38.349 Format NVM: Supported 00:07:38.349 Firmware Activate/Download: Not Supported 00:07:38.349 Namespace Management: Supported 00:07:38.349 Device Self-Test: Not Supported 00:07:38.349 Directives: Supported 00:07:38.349 NVMe-MI: Not Supported 00:07:38.349 Virtualization Management: Not Supported 00:07:38.349 Doorbell Buffer Config: Supported 00:07:38.349 Get LBA Status Capability: Not Supported 00:07:38.349 Command & Feature Lockdown Capability: Not Supported 00:07:38.349 Abort Command Limit: 4 00:07:38.349 Async Event Request Limit: 4 00:07:38.349 Number of Firmware Slots: N/A 00:07:38.349 Firmware Slot 1 Read-Only: N/A 00:07:38.349 Firmware Activation Without Reset: N/A 00:07:38.349 Multiple Update Detection Support: N/A 00:07:38.349 Firmware Update Granularity: No Information Provided 00:07:38.349 Per-Namespace SMART Log: Yes 00:07:38.349 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.349 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:38.349 Command Effects Log Page: Supported 00:07:38.349 Get Log Page Extended Data: Supported 00:07:38.349 Telemetry Log Pages: Not Supported 00:07:38.349 Persistent Event Log Pages: Not Supported 00:07:38.349 Supported Log Pages Log Page: May Support 00:07:38.349 Commands Supported & Effects Log Page: Not Supported 00:07:38.349 Feature Identifiers & Effects Log Page:May Support 00:07:38.349 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.349 Data Area 4 for Telemetry Log: Not Supported 00:07:38.349 Error Log Page Entries Supported: 1 00:07:38.349 Keep Alive: Not Supported 00:07:38.349 00:07:38.349 NVM Command Set Attributes 00:07:38.349 ========================== 00:07:38.349 Submission Queue Entry Size 00:07:38.349 Max: 64 00:07:38.349 Min: 64 00:07:38.349 Completion Queue Entry Size 00:07:38.349 Max: 16 00:07:38.349 Min: 16 00:07:38.349 Number of Namespaces: 256 00:07:38.349 Compare Command: Supported 00:07:38.349 Write Uncorrectable Command: Not Supported 00:07:38.349 Dataset Management Command: 
Supported 00:07:38.349 Write Zeroes Command: Supported 00:07:38.349 Set Features Save Field: Supported 00:07:38.349 Reservations: Not Supported 00:07:38.349 Timestamp: Supported 00:07:38.349 Copy: Supported 00:07:38.349 Volatile Write Cache: Present 00:07:38.349 Atomic Write Unit (Normal): 1 00:07:38.349 Atomic Write Unit (PFail): 1 00:07:38.349 Atomic Compare & Write Unit: 1 00:07:38.349 Fused Compare & Write: Not Supported 00:07:38.349 Scatter-Gather List 00:07:38.349 SGL Command Set: Supported 00:07:38.349 SGL Keyed: Not Supported 00:07:38.349 SGL Bit Bucket Descriptor: Not Supported 00:07:38.349 SGL Metadata Pointer: Not Supported 00:07:38.349 Oversized SGL: Not Supported 00:07:38.349 SGL Metadata Address: Not Supported 00:07:38.349 SGL Offset: Not Supported 00:07:38.349 Transport SGL Data Block: Not Supported 00:07:38.349 Replay Protected Memory Block: Not Supported 00:07:38.349 00:07:38.349 Firmware Slot Information 00:07:38.349 ========================= 00:07:38.349 Active slot: 1 00:07:38.349 Slot 1 Firmware Revision: 1.0 00:07:38.349 00:07:38.349 00:07:38.349 Commands Supported and Effects 00:07:38.349 ============================== 00:07:38.349 Admin Commands 00:07:38.349 -------------- 00:07:38.349 Delete I/O Submission Queue (00h): Supported 00:07:38.349 Create I/O Submission Queue (01h): Supported 00:07:38.349 Get Log Page (02h): Supported 00:07:38.349 Delete I/O Completion Queue (04h): Supported 00:07:38.349 Create I/O Completion Queue (05h): Supported 00:07:38.349 Identify (06h): Supported 00:07:38.349 Abort (08h): Supported 00:07:38.349 Set Features (09h): Supported 00:07:38.349 Get Features (0Ah): Supported 00:07:38.349 Asynchronous Event Request (0Ch): Supported 00:07:38.349 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.349 Directive Send (19h): Supported 00:07:38.349 Directive Receive (1Ah): Supported 00:07:38.349 Virtualization Management (1Ch): Supported 00:07:38.349 Doorbell Buffer Config (7Ch): Supported 00:07:38.349 Format NVM (80h): Supported LBA-Change 00:07:38.349 I/O Commands 00:07:38.349 ------------ 00:07:38.349 Flush (00h): Supported LBA-Change 00:07:38.349 Write (01h): Supported LBA-Change 00:07:38.349 Read (02h): Supported 00:07:38.349 Compare (05h): Supported 00:07:38.349 Write Zeroes (08h): Supported LBA-Change 00:07:38.349 Dataset Management (09h): Supported LBA-Change 00:07:38.349 Unknown (0Ch): Supported 00:07:38.349 Unknown (12h): Supported 00:07:38.349 Copy (19h): Supported LBA-Change 00:07:38.349 Unknown (1Dh): Supported LBA-Change 00:07:38.349 00:07:38.349 Error Log 00:07:38.349 ========= 00:07:38.349 00:07:38.349 Arbitration 00:07:38.349 =========== 00:07:38.349 Arbitration Burst: no limit 00:07:38.349 00:07:38.349 Power Management 00:07:38.349 ================ 00:07:38.349 Number of Power States: 1 00:07:38.349 Current Power State: Power State #0 00:07:38.349 Power State #0: 00:07:38.349 Max Power: 25.00 W 00:07:38.349 Non-Operational State: Operational 00:07:38.349 Entry Latency: 16 microseconds 00:07:38.349 Exit Latency: 4 microseconds 00:07:38.349 Relative Read Throughput: 0 00:07:38.349 Relative Read Latency: 0 00:07:38.349 Relative Write Throughput: 0 00:07:38.349 Relative Write Latency: 0 00:07:38.349 Idle Power: Not Reported 00:07:38.349 Active Power: Not Reported 00:07:38.349 Non-Operational Permissive Mode: Not Supported 00:07:38.349 00:07:38.349 Health Information 00:07:38.349 ================== 00:07:38.349 Critical Warnings: 00:07:38.349 Available Spare Space: OK 00:07:38.349 Temperature: OK 00:07:38.349 Device 
Reliability: OK 00:07:38.349 Read Only: No 00:07:38.349 Volatile Memory Backup: OK 00:07:38.349 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.349 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.349 Available Spare: 0% 00:07:38.349 Available Spare Threshold: 0% 00:07:38.349 Life Percentage Used: 0% 00:07:38.349 Data Units Read: 2219 00:07:38.349 Data Units Written: 2006 00:07:38.349 Host Read Commands: 114774 00:07:38.349 Host Write Commands: 113043 00:07:38.349 Controller Busy Time: 0 minutes 00:07:38.349 Power Cycles: 0 00:07:38.349 Power On Hours: 0 hours 00:07:38.349 Unsafe Shutdowns: 0 00:07:38.349 Unrecoverable Media Errors: 0 00:07:38.349 Lifetime Error Log Entries: 0 00:07:38.349 Warning Temperature Time: 0 minutes 00:07:38.349 Critical Temperature Time: 0 minutes 00:07:38.349 00:07:38.349 Number of Queues 00:07:38.349 ================ 00:07:38.349 Number of I/O Submission Queues: 64 00:07:38.349 Number of I/O Completion Queues: 64 00:07:38.349 00:07:38.349 ZNS Specific Controller Data 00:07:38.349 ============================ 00:07:38.349 Zone Append Size Limit: 0 00:07:38.349 00:07:38.349 00:07:38.349 Active Namespaces 00:07:38.349 ================= 00:07:38.349 Namespace ID:1 00:07:38.349 Error Recovery Timeout: Unlimited 00:07:38.349 Command Set Identifier: NVM (00h) 00:07:38.349 Deallocate: Supported 00:07:38.349 Deallocated/Unwritten Error: Supported 00:07:38.350 Deallocated Read Value: All 0x00 00:07:38.350 Deallocate in Write Zeroes: Not Supported 00:07:38.350 Deallocated Guard Field: 0xFFFF 00:07:38.350 Flush: Supported 00:07:38.350 Reservation: Not Supported 00:07:38.350 Namespace Sharing Capabilities: Private 00:07:38.350 Size (in LBAs): 1048576 (4GiB) 00:07:38.350 Capacity (in LBAs): 1048576 (4GiB) 00:07:38.350 Utilization (in LBAs): 1048576 (4GiB) 00:07:38.350 Thin Provisioning: Not Supported 00:07:38.350 Per-NS Atomic Units: No 00:07:38.350 Maximum Single Source Range Length: 128 00:07:38.350 Maximum Copy Length: 128 00:07:38.350 Maximum Source Range Count: 128 00:07:38.350 NGUID/EUI64 Never Reused: No 00:07:38.350 Namespace Write Protected: No 00:07:38.350 Number of LBA Formats: 8 00:07:38.350 Current LBA Format: LBA Format #04 00:07:38.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.350 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.350 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.350 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.350 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.350 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.350 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.350 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.350 00:07:38.350 NVM Specific Namespace Data 00:07:38.350 =========================== 00:07:38.350 Logical Block Storage Tag Mask: 0 00:07:38.350 Protection Information Capabilities: 00:07:38.350 16b Guard Protection Information Storage Tag Support: No 00:07:38.350 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.350 Storage Tag Check Read Support: No 00:07:38.350 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Namespace ID:2 00:07:38.350 Error Recovery Timeout: Unlimited 00:07:38.350 Command Set Identifier: NVM (00h) 00:07:38.350 Deallocate: Supported 00:07:38.350 Deallocated/Unwritten Error: Supported 00:07:38.350 Deallocated Read Value: All 0x00 00:07:38.350 Deallocate in Write Zeroes: Not Supported 00:07:38.350 Deallocated Guard Field: 0xFFFF 00:07:38.350 Flush: Supported 00:07:38.350 Reservation: Not Supported 00:07:38.350 Namespace Sharing Capabilities: Private 00:07:38.350 Size (in LBAs): 1048576 (4GiB) 00:07:38.350 Capacity (in LBAs): 1048576 (4GiB) 00:07:38.350 Utilization (in LBAs): 1048576 (4GiB) 00:07:38.350 Thin Provisioning: Not Supported 00:07:38.350 Per-NS Atomic Units: No 00:07:38.350 Maximum Single Source Range Length: 128 00:07:38.350 Maximum Copy Length: 128 00:07:38.350 Maximum Source Range Count: 128 00:07:38.350 NGUID/EUI64 Never Reused: No 00:07:38.350 Namespace Write Protected: No 00:07:38.350 Number of LBA Formats: 8 00:07:38.350 Current LBA Format: LBA Format #04 00:07:38.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.350 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.350 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.350 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.350 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.350 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.350 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.350 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.350 00:07:38.350 NVM Specific Namespace Data 00:07:38.350 =========================== 00:07:38.350 Logical Block Storage Tag Mask: 0 00:07:38.350 Protection Information Capabilities: 00:07:38.350 16b Guard Protection Information Storage Tag Support: No 00:07:38.350 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.350 Storage Tag Check Read Support: No 00:07:38.350 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Namespace ID:3 00:07:38.350 Error Recovery Timeout: Unlimited 00:07:38.350 Command Set Identifier: NVM (00h) 00:07:38.350 Deallocate: Supported 00:07:38.350 Deallocated/Unwritten Error: Supported 00:07:38.350 Deallocated Read Value: All 0x00 00:07:38.350 Deallocate in Write Zeroes: Not Supported 00:07:38.350 Deallocated Guard Field: 0xFFFF 00:07:38.350 Flush: Supported 00:07:38.350 Reservation: Not Supported 00:07:38.350 
Namespace Sharing Capabilities: Private 00:07:38.350 Size (in LBAs): 1048576 (4GiB) 00:07:38.350 Capacity (in LBAs): 1048576 (4GiB) 00:07:38.350 Utilization (in LBAs): 1048576 (4GiB) 00:07:38.350 Thin Provisioning: Not Supported 00:07:38.350 Per-NS Atomic Units: No 00:07:38.350 Maximum Single Source Range Length: 128 00:07:38.350 Maximum Copy Length: 128 00:07:38.350 Maximum Source Range Count: 128 00:07:38.350 NGUID/EUI64 Never Reused: No 00:07:38.350 Namespace Write Protected: No 00:07:38.350 Number of LBA Formats: 8 00:07:38.350 Current LBA Format: LBA Format #04 00:07:38.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.350 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.350 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.350 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.350 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.350 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.350 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.350 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.350 00:07:38.350 NVM Specific Namespace Data 00:07:38.350 =========================== 00:07:38.350 Logical Block Storage Tag Mask: 0 00:07:38.350 Protection Information Capabilities: 00:07:38.350 16b Guard Protection Information Storage Tag Support: No 00:07:38.350 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.350 Storage Tag Check Read Support: No 00:07:38.350 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.350 03:58:31 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:38.350 03:58:31 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:38.350 ===================================================== 00:07:38.350 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:38.350 ===================================================== 00:07:38.350 Controller Capabilities/Features 00:07:38.350 ================================ 00:07:38.350 Vendor ID: 1b36 00:07:38.350 Subsystem Vendor ID: 1af4 00:07:38.350 Serial Number: 12343 00:07:38.350 Model Number: QEMU NVMe Ctrl 00:07:38.350 Firmware Version: 8.0.0 00:07:38.350 Recommended Arb Burst: 6 00:07:38.350 IEEE OUI Identifier: 00 54 52 00:07:38.350 Multi-path I/O 00:07:38.350 May have multiple subsystem ports: No 00:07:38.350 May have multiple controllers: Yes 00:07:38.350 Associated with SR-IOV VF: No 00:07:38.350 Max Data Transfer Size: 524288 00:07:38.350 Max Number of Namespaces: 256 00:07:38.350 Max Number of I/O Queues: 64 00:07:38.350 NVMe Specification Version (VS): 1.4 00:07:38.350 NVMe Specification Version (Identify): 1.4 00:07:38.350 Maximum Queue Entries: 2048 
00:07:38.350 Contiguous Queues Required: Yes 00:07:38.350 Arbitration Mechanisms Supported 00:07:38.350 Weighted Round Robin: Not Supported 00:07:38.350 Vendor Specific: Not Supported 00:07:38.350 Reset Timeout: 7500 ms 00:07:38.350 Doorbell Stride: 4 bytes 00:07:38.350 NVM Subsystem Reset: Not Supported 00:07:38.350 Command Sets Supported 00:07:38.350 NVM Command Set: Supported 00:07:38.350 Boot Partition: Not Supported 00:07:38.350 Memory Page Size Minimum: 4096 bytes 00:07:38.350 Memory Page Size Maximum: 65536 bytes 00:07:38.350 Persistent Memory Region: Not Supported 00:07:38.350 Optional Asynchronous Events Supported 00:07:38.350 Namespace Attribute Notices: Supported 00:07:38.350 Firmware Activation Notices: Not Supported 00:07:38.350 ANA Change Notices: Not Supported 00:07:38.350 PLE Aggregate Log Change Notices: Not Supported 00:07:38.350 LBA Status Info Alert Notices: Not Supported 00:07:38.350 EGE Aggregate Log Change Notices: Not Supported 00:07:38.350 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.350 Zone Descriptor Change Notices: Not Supported 00:07:38.350 Discovery Log Change Notices: Not Supported 00:07:38.350 Controller Attributes 00:07:38.350 128-bit Host Identifier: Not Supported 00:07:38.351 Non-Operational Permissive Mode: Not Supported 00:07:38.351 NVM Sets: Not Supported 00:07:38.351 Read Recovery Levels: Not Supported 00:07:38.351 Endurance Groups: Supported 00:07:38.351 Predictable Latency Mode: Not Supported 00:07:38.351 Traffic Based Keep ALive: Not Supported 00:07:38.351 Namespace Granularity: Not Supported 00:07:38.351 SQ Associations: Not Supported 00:07:38.351 UUID List: Not Supported 00:07:38.351 Multi-Domain Subsystem: Not Supported 00:07:38.351 Fixed Capacity Management: Not Supported 00:07:38.351 Variable Capacity Management: Not Supported 00:07:38.351 Delete Endurance Group: Not Supported 00:07:38.351 Delete NVM Set: Not Supported 00:07:38.351 Extended LBA Formats Supported: Supported 00:07:38.351 Flexible Data Placement Supported: Supported 00:07:38.351 00:07:38.351 Controller Memory Buffer Support 00:07:38.351 ================================ 00:07:38.351 Supported: No 00:07:38.351 00:07:38.351 Persistent Memory Region Support 00:07:38.351 ================================ 00:07:38.351 Supported: No 00:07:38.351 00:07:38.351 Admin Command Set Attributes 00:07:38.351 ============================ 00:07:38.351 Security Send/Receive: Not Supported 00:07:38.351 Format NVM: Supported 00:07:38.351 Firmware Activate/Download: Not Supported 00:07:38.351 Namespace Management: Supported 00:07:38.351 Device Self-Test: Not Supported 00:07:38.351 Directives: Supported 00:07:38.351 NVMe-MI: Not Supported 00:07:38.351 Virtualization Management: Not Supported 00:07:38.351 Doorbell Buffer Config: Supported 00:07:38.351 Get LBA Status Capability: Not Supported 00:07:38.351 Command & Feature Lockdown Capability: Not Supported 00:07:38.351 Abort Command Limit: 4 00:07:38.351 Async Event Request Limit: 4 00:07:38.351 Number of Firmware Slots: N/A 00:07:38.351 Firmware Slot 1 Read-Only: N/A 00:07:38.351 Firmware Activation Without Reset: N/A 00:07:38.351 Multiple Update Detection Support: N/A 00:07:38.351 Firmware Update Granularity: No Information Provided 00:07:38.351 Per-Namespace SMART Log: Yes 00:07:38.351 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.351 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:38.351 Command Effects Log Page: Supported 00:07:38.351 Get Log Page Extended Data: Supported 00:07:38.351 Telemetry Log Pages: Not 
Supported 00:07:38.351 Persistent Event Log Pages: Not Supported 00:07:38.351 Supported Log Pages Log Page: May Support 00:07:38.351 Commands Supported & Effects Log Page: Not Supported 00:07:38.351 Feature Identifiers & Effects Log Page:May Support 00:07:38.351 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.351 Data Area 4 for Telemetry Log: Not Supported 00:07:38.351 Error Log Page Entries Supported: 1 00:07:38.351 Keep Alive: Not Supported 00:07:38.351 00:07:38.351 NVM Command Set Attributes 00:07:38.351 ========================== 00:07:38.351 Submission Queue Entry Size 00:07:38.351 Max: 64 00:07:38.351 Min: 64 00:07:38.351 Completion Queue Entry Size 00:07:38.351 Max: 16 00:07:38.351 Min: 16 00:07:38.351 Number of Namespaces: 256 00:07:38.351 Compare Command: Supported 00:07:38.351 Write Uncorrectable Command: Not Supported 00:07:38.351 Dataset Management Command: Supported 00:07:38.351 Write Zeroes Command: Supported 00:07:38.351 Set Features Save Field: Supported 00:07:38.351 Reservations: Not Supported 00:07:38.351 Timestamp: Supported 00:07:38.351 Copy: Supported 00:07:38.351 Volatile Write Cache: Present 00:07:38.351 Atomic Write Unit (Normal): 1 00:07:38.351 Atomic Write Unit (PFail): 1 00:07:38.351 Atomic Compare & Write Unit: 1 00:07:38.351 Fused Compare & Write: Not Supported 00:07:38.351 Scatter-Gather List 00:07:38.351 SGL Command Set: Supported 00:07:38.351 SGL Keyed: Not Supported 00:07:38.351 SGL Bit Bucket Descriptor: Not Supported 00:07:38.351 SGL Metadata Pointer: Not Supported 00:07:38.351 Oversized SGL: Not Supported 00:07:38.351 SGL Metadata Address: Not Supported 00:07:38.351 SGL Offset: Not Supported 00:07:38.351 Transport SGL Data Block: Not Supported 00:07:38.351 Replay Protected Memory Block: Not Supported 00:07:38.351 00:07:38.351 Firmware Slot Information 00:07:38.351 ========================= 00:07:38.351 Active slot: 1 00:07:38.351 Slot 1 Firmware Revision: 1.0 00:07:38.351 00:07:38.351 00:07:38.351 Commands Supported and Effects 00:07:38.351 ============================== 00:07:38.351 Admin Commands 00:07:38.351 -------------- 00:07:38.351 Delete I/O Submission Queue (00h): Supported 00:07:38.351 Create I/O Submission Queue (01h): Supported 00:07:38.351 Get Log Page (02h): Supported 00:07:38.351 Delete I/O Completion Queue (04h): Supported 00:07:38.351 Create I/O Completion Queue (05h): Supported 00:07:38.351 Identify (06h): Supported 00:07:38.351 Abort (08h): Supported 00:07:38.351 Set Features (09h): Supported 00:07:38.351 Get Features (0Ah): Supported 00:07:38.351 Asynchronous Event Request (0Ch): Supported 00:07:38.351 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.351 Directive Send (19h): Supported 00:07:38.351 Directive Receive (1Ah): Supported 00:07:38.351 Virtualization Management (1Ch): Supported 00:07:38.351 Doorbell Buffer Config (7Ch): Supported 00:07:38.351 Format NVM (80h): Supported LBA-Change 00:07:38.351 I/O Commands 00:07:38.351 ------------ 00:07:38.351 Flush (00h): Supported LBA-Change 00:07:38.351 Write (01h): Supported LBA-Change 00:07:38.351 Read (02h): Supported 00:07:38.351 Compare (05h): Supported 00:07:38.351 Write Zeroes (08h): Supported LBA-Change 00:07:38.351 Dataset Management (09h): Supported LBA-Change 00:07:38.351 Unknown (0Ch): Supported 00:07:38.351 Unknown (12h): Supported 00:07:38.351 Copy (19h): Supported LBA-Change 00:07:38.351 Unknown (1Dh): Supported LBA-Change 00:07:38.351 00:07:38.351 Error Log 00:07:38.351 ========= 00:07:38.351 00:07:38.351 Arbitration 00:07:38.351 =========== 
00:07:38.351 Arbitration Burst: no limit 00:07:38.351 00:07:38.351 Power Management 00:07:38.351 ================ 00:07:38.351 Number of Power States: 1 00:07:38.351 Current Power State: Power State #0 00:07:38.351 Power State #0: 00:07:38.351 Max Power: 25.00 W 00:07:38.351 Non-Operational State: Operational 00:07:38.351 Entry Latency: 16 microseconds 00:07:38.351 Exit Latency: 4 microseconds 00:07:38.351 Relative Read Throughput: 0 00:07:38.351 Relative Read Latency: 0 00:07:38.351 Relative Write Throughput: 0 00:07:38.351 Relative Write Latency: 0 00:07:38.351 Idle Power: Not Reported 00:07:38.351 Active Power: Not Reported 00:07:38.351 Non-Operational Permissive Mode: Not Supported 00:07:38.351 00:07:38.351 Health Information 00:07:38.351 ================== 00:07:38.351 Critical Warnings: 00:07:38.351 Available Spare Space: OK 00:07:38.351 Temperature: OK 00:07:38.351 Device Reliability: OK 00:07:38.351 Read Only: No 00:07:38.351 Volatile Memory Backup: OK 00:07:38.351 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.351 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.351 Available Spare: 0% 00:07:38.351 Available Spare Threshold: 0% 00:07:38.351 Life Percentage Used: 0% 00:07:38.351 Data Units Read: 834 00:07:38.351 Data Units Written: 763 00:07:38.351 Host Read Commands: 39173 00:07:38.351 Host Write Commands: 38596 00:07:38.351 Controller Busy Time: 0 minutes 00:07:38.351 Power Cycles: 0 00:07:38.351 Power On Hours: 0 hours 00:07:38.351 Unsafe Shutdowns: 0 00:07:38.351 Unrecoverable Media Errors: 0 00:07:38.351 Lifetime Error Log Entries: 0 00:07:38.351 Warning Temperature Time: 0 minutes 00:07:38.351 Critical Temperature Time: 0 minutes 00:07:38.351 00:07:38.351 Number of Queues 00:07:38.351 ================ 00:07:38.351 Number of I/O Submission Queues: 64 00:07:38.351 Number of I/O Completion Queues: 64 00:07:38.351 00:07:38.351 ZNS Specific Controller Data 00:07:38.351 ============================ 00:07:38.351 Zone Append Size Limit: 0 00:07:38.351 00:07:38.351 00:07:38.351 Active Namespaces 00:07:38.351 ================= 00:07:38.351 Namespace ID:1 00:07:38.351 Error Recovery Timeout: Unlimited 00:07:38.351 Command Set Identifier: NVM (00h) 00:07:38.351 Deallocate: Supported 00:07:38.351 Deallocated/Unwritten Error: Supported 00:07:38.351 Deallocated Read Value: All 0x00 00:07:38.351 Deallocate in Write Zeroes: Not Supported 00:07:38.351 Deallocated Guard Field: 0xFFFF 00:07:38.351 Flush: Supported 00:07:38.351 Reservation: Not Supported 00:07:38.351 Namespace Sharing Capabilities: Multiple Controllers 00:07:38.351 Size (in LBAs): 262144 (1GiB) 00:07:38.351 Capacity (in LBAs): 262144 (1GiB) 00:07:38.351 Utilization (in LBAs): 262144 (1GiB) 00:07:38.351 Thin Provisioning: Not Supported 00:07:38.351 Per-NS Atomic Units: No 00:07:38.351 Maximum Single Source Range Length: 128 00:07:38.351 Maximum Copy Length: 128 00:07:38.351 Maximum Source Range Count: 128 00:07:38.351 NGUID/EUI64 Never Reused: No 00:07:38.351 Namespace Write Protected: No 00:07:38.351 Endurance group ID: 1 00:07:38.351 Number of LBA Formats: 8 00:07:38.351 Current LBA Format: LBA Format #04 00:07:38.351 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.351 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.351 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.352 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.352 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.352 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.352 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:38.352 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.352 00:07:38.352 Get Feature FDP: 00:07:38.352 ================ 00:07:38.352 Enabled: Yes 00:07:38.352 FDP configuration index: 0 00:07:38.352 00:07:38.352 FDP configurations log page 00:07:38.352 =========================== 00:07:38.352 Number of FDP configurations: 1 00:07:38.352 Version: 0 00:07:38.352 Size: 112 00:07:38.352 FDP Configuration Descriptor: 0 00:07:38.352 Descriptor Size: 96 00:07:38.352 Reclaim Group Identifier format: 2 00:07:38.352 FDP Volatile Write Cache: Not Present 00:07:38.352 FDP Configuration: Valid 00:07:38.352 Vendor Specific Size: 0 00:07:38.352 Number of Reclaim Groups: 2 00:07:38.352 Number of Recalim Unit Handles: 8 00:07:38.352 Max Placement Identifiers: 128 00:07:38.352 Number of Namespaces Suppprted: 256 00:07:38.352 Reclaim unit Nominal Size: 6000000 bytes 00:07:38.352 Estimated Reclaim Unit Time Limit: Not Reported 00:07:38.352 RUH Desc #000: RUH Type: Initially Isolated 00:07:38.352 RUH Desc #001: RUH Type: Initially Isolated 00:07:38.352 RUH Desc #002: RUH Type: Initially Isolated 00:07:38.352 RUH Desc #003: RUH Type: Initially Isolated 00:07:38.352 RUH Desc #004: RUH Type: Initially Isolated 00:07:38.352 RUH Desc #005: RUH Type: Initially Isolated 00:07:38.352 RUH Desc #006: RUH Type: Initially Isolated 00:07:38.352 RUH Desc #007: RUH Type: Initially Isolated 00:07:38.352 00:07:38.352 FDP reclaim unit handle usage log page 00:07:38.610 ====================================== 00:07:38.610 Number of Reclaim Unit Handles: 8 00:07:38.610 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:38.610 RUH Usage Desc #001: RUH Attributes: Unused 00:07:38.610 RUH Usage Desc #002: RUH Attributes: Unused 00:07:38.610 RUH Usage Desc #003: RUH Attributes: Unused 00:07:38.610 RUH Usage Desc #004: RUH Attributes: Unused 00:07:38.610 RUH Usage Desc #005: RUH Attributes: Unused 00:07:38.610 RUH Usage Desc #006: RUH Attributes: Unused 00:07:38.610 RUH Usage Desc #007: RUH Attributes: Unused 00:07:38.610 00:07:38.610 FDP statistics log page 00:07:38.610 ======================= 00:07:38.610 Host bytes with metadata written: 481402880 00:07:38.610 Media bytes with metadata written: 481447936 00:07:38.610 Media bytes erased: 0 00:07:38.610 00:07:38.610 FDP events log page 00:07:38.610 =================== 00:07:38.610 Number of FDP events: 0 00:07:38.610 00:07:38.610 NVM Specific Namespace Data 00:07:38.610 =========================== 00:07:38.610 Logical Block Storage Tag Mask: 0 00:07:38.610 Protection Information Capabilities: 00:07:38.610 16b Guard Protection Information Storage Tag Support: No 00:07:38.610 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.610 Storage Tag Check Read Support: No 00:07:38.610 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.610 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.610 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.610 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.610 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.610 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.610 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.610 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.610 00:07:38.610 real 0m1.157s 00:07:38.610 user 0m0.396s 00:07:38.610 sys 0m0.555s 00:07:38.610 03:58:31 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.610 03:58:31 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:38.610 ************************************ 00:07:38.610 END TEST nvme_identify 00:07:38.610 ************************************ 00:07:38.610 03:58:31 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:38.610 03:58:31 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.610 03:58:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.610 03:58:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.610 ************************************ 00:07:38.610 START TEST nvme_perf 00:07:38.610 ************************************ 00:07:38.610 03:58:31 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:38.610 03:58:31 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:39.986 Initializing NVMe Controllers 00:07:39.986 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:39.986 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:39.986 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:39.986 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:39.986 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:39.986 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:39.986 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:39.986 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:39.986 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:39.986 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:39.986 Initialization complete. Launching workers. 
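For reference, the two SPDK command-line tools this stage exercises can be reproduced by hand from a built SPDK tree. The sketch below is not part of the captured output: SPDK_DIR and BDF are convenience variables introduced here, and their values are simply the path and PCIe address that appear in the log above; adjust both for another environment, and note that the target device normally has to be bound to a userspace driver first (SPDK's scripts/setup.sh typically handles that).

  # Path and BDF taken from the log above; both are environment-specific.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  BDF=0000:00:13.0

  # Dump controller and namespace identify data for one PCIe controller,
  # using the same flags as the nvme_identify test above.
  "$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$BDF" -i 0

  # 1-second, queue-depth-128, 12288-byte read workload with latency tracking
  # enabled (-LL), matching the nvme_perf invocation whose per-device latency
  # summaries and histograms follow below. Without -r it attaches to every
  # local PCIe NVMe controller, which is why all four QEMU controllers appear
  # in the results.
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
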
00:07:39.986 ======================================================== 00:07:39.986 Latency(us) 00:07:39.986 Device Information : IOPS MiB/s Average min max 00:07:39.986 PCIE (0000:00:13.0) NSID 1 from core 0: 18777.17 220.04 6826.37 5678.70 29417.44 00:07:39.986 PCIE (0000:00:10.0) NSID 1 from core 0: 18777.17 220.04 6815.66 5588.04 27903.06 00:07:39.986 PCIE (0000:00:11.0) NSID 1 from core 0: 18777.17 220.04 6805.81 5646.91 26112.23 00:07:39.986 PCIE (0000:00:12.0) NSID 1 from core 0: 18777.17 220.04 6795.26 5706.31 24781.47 00:07:39.986 PCIE (0000:00:12.0) NSID 2 from core 0: 18777.17 220.04 6784.77 5702.54 23063.16 00:07:39.986 PCIE (0000:00:12.0) NSID 3 from core 0: 18777.17 220.04 6774.13 5688.55 21383.19 00:07:39.986 ======================================================== 00:07:39.986 Total : 112663.01 1320.27 6800.33 5588.04 29417.44 00:07:39.986 00:07:39.986 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:39.986 ================================================================================= 00:07:39.986 1.00000% : 5822.622us 00:07:39.986 10.00000% : 5973.858us 00:07:39.986 25.00000% : 6150.302us 00:07:39.986 50.00000% : 6402.363us 00:07:39.986 75.00000% : 6704.837us 00:07:39.986 90.00000% : 7965.145us 00:07:39.986 95.00000% : 9427.102us 00:07:39.986 98.00000% : 11594.831us 00:07:39.986 99.00000% : 14518.745us 00:07:39.986 99.50000% : 23794.609us 00:07:39.986 99.90000% : 29037.489us 00:07:39.986 99.99000% : 29440.788us 00:07:39.986 99.99900% : 29440.788us 00:07:39.986 99.99990% : 29440.788us 00:07:39.986 99.99999% : 29440.788us 00:07:39.986 00:07:39.986 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:39.986 ================================================================================= 00:07:39.986 1.00000% : 5747.003us 00:07:39.986 10.00000% : 5923.446us 00:07:39.986 25.00000% : 6125.095us 00:07:39.986 50.00000% : 6427.569us 00:07:39.986 75.00000% : 6755.249us 00:07:39.986 90.00000% : 8015.557us 00:07:39.986 95.00000% : 9376.689us 00:07:39.986 98.00000% : 11746.068us 00:07:39.986 99.00000% : 13510.498us 00:07:39.986 99.50000% : 22282.240us 00:07:39.986 99.90000% : 27625.945us 00:07:39.986 99.99000% : 28029.243us 00:07:39.986 99.99900% : 28029.243us 00:07:39.986 99.99990% : 28029.243us 00:07:39.986 99.99999% : 28029.243us 00:07:39.986 00:07:39.986 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:39.986 ================================================================================= 00:07:39.986 1.00000% : 5822.622us 00:07:39.986 10.00000% : 5973.858us 00:07:39.986 25.00000% : 6150.302us 00:07:39.986 50.00000% : 6402.363us 00:07:39.986 75.00000% : 6704.837us 00:07:39.986 90.00000% : 8065.969us 00:07:39.986 95.00000% : 9427.102us 00:07:39.986 98.00000% : 11796.480us 00:07:39.986 99.00000% : 13611.323us 00:07:39.986 99.50000% : 20568.222us 00:07:39.986 99.90000% : 25710.277us 00:07:39.986 99.99000% : 26214.400us 00:07:39.986 99.99900% : 26214.400us 00:07:39.986 99.99990% : 26214.400us 00:07:39.986 99.99999% : 26214.400us 00:07:39.986 00:07:39.986 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:39.986 ================================================================================= 00:07:39.986 1.00000% : 5822.622us 00:07:39.986 10.00000% : 5973.858us 00:07:39.986 25.00000% : 6150.302us 00:07:39.986 50.00000% : 6402.363us 00:07:39.986 75.00000% : 6654.425us 00:07:39.986 90.00000% : 8065.969us 00:07:39.986 95.00000% : 9326.277us 00:07:39.986 98.00000% : 11443.594us 00:07:39.986 99.00000% : 
13913.797us 00:07:39.986 99.50000% : 19156.677us 00:07:39.986 99.90000% : 24399.557us 00:07:39.986 99.99000% : 24802.855us 00:07:39.986 99.99900% : 24802.855us 00:07:39.986 99.99990% : 24802.855us 00:07:39.986 99.99999% : 24802.855us 00:07:39.986 00:07:39.986 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:39.986 ================================================================================= 00:07:39.986 1.00000% : 5822.622us 00:07:39.986 10.00000% : 5973.858us 00:07:39.986 25.00000% : 6150.302us 00:07:39.986 50.00000% : 6402.363us 00:07:39.986 75.00000% : 6654.425us 00:07:39.986 90.00000% : 8116.382us 00:07:39.986 95.00000% : 9427.102us 00:07:39.986 98.00000% : 11443.594us 00:07:39.986 99.00000% : 14014.622us 00:07:39.986 99.50000% : 17442.658us 00:07:39.986 99.90000% : 22685.538us 00:07:39.986 99.99000% : 23088.837us 00:07:39.986 99.99900% : 23088.837us 00:07:39.986 99.99990% : 23088.837us 00:07:39.986 99.99999% : 23088.837us 00:07:39.986 00:07:39.986 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:39.986 ================================================================================= 00:07:39.986 1.00000% : 5822.622us 00:07:39.986 10.00000% : 5973.858us 00:07:39.986 25.00000% : 6150.302us 00:07:39.986 50.00000% : 6402.363us 00:07:39.986 75.00000% : 6654.425us 00:07:39.986 90.00000% : 8015.557us 00:07:39.986 95.00000% : 9527.926us 00:07:39.986 98.00000% : 11544.418us 00:07:39.986 99.00000% : 14619.569us 00:07:39.986 99.50000% : 15930.289us 00:07:39.986 99.90000% : 20971.520us 00:07:39.986 99.99000% : 21374.818us 00:07:39.986 99.99900% : 21475.643us 00:07:39.986 99.99990% : 21475.643us 00:07:39.986 99.99999% : 21475.643us 00:07:39.986 00:07:39.986 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:39.986 ============================================================================== 00:07:39.986 Range in us Cumulative IO count 00:07:39.986 5671.385 - 5696.591: 0.0266% ( 5) 00:07:39.986 5696.591 - 5721.797: 0.0903% ( 12) 00:07:39.986 5721.797 - 5747.003: 0.2232% ( 25) 00:07:39.986 5747.003 - 5772.209: 0.3933% ( 32) 00:07:39.986 5772.209 - 5797.415: 0.7281% ( 63) 00:07:39.986 5797.415 - 5822.622: 1.4403% ( 134) 00:07:39.986 5822.622 - 5847.828: 2.3438% ( 170) 00:07:39.986 5847.828 - 5873.034: 3.6086% ( 238) 00:07:39.986 5873.034 - 5898.240: 5.1127% ( 283) 00:07:39.987 5898.240 - 5923.446: 6.8346% ( 324) 00:07:39.987 5923.446 - 5948.652: 8.7054% ( 352) 00:07:39.987 5948.652 - 5973.858: 10.7090% ( 377) 00:07:39.987 5973.858 - 5999.065: 12.8082% ( 395) 00:07:39.987 5999.065 - 6024.271: 14.9979% ( 412) 00:07:39.987 6024.271 - 6049.477: 17.3469% ( 442) 00:07:39.987 6049.477 - 6074.683: 19.5897% ( 422) 00:07:39.987 6074.683 - 6099.889: 22.0398% ( 461) 00:07:39.987 6099.889 - 6125.095: 24.4207% ( 448) 00:07:39.987 6125.095 - 6150.302: 26.7910% ( 446) 00:07:39.987 6150.302 - 6175.508: 29.1667% ( 447) 00:07:39.987 6175.508 - 6200.714: 31.5636% ( 451) 00:07:39.987 6200.714 - 6225.920: 33.9658% ( 452) 00:07:39.987 6225.920 - 6251.126: 36.3467% ( 448) 00:07:39.987 6251.126 - 6276.332: 38.7596% ( 454) 00:07:39.987 6276.332 - 6301.538: 41.1990% ( 459) 00:07:39.987 6301.538 - 6326.745: 43.6650% ( 464) 00:07:39.987 6326.745 - 6351.951: 46.1044% ( 459) 00:07:39.987 6351.951 - 6377.157: 48.5385% ( 458) 00:07:39.987 6377.157 - 6402.363: 50.9513% ( 454) 00:07:39.987 6402.363 - 6427.569: 53.4332% ( 467) 00:07:39.987 6427.569 - 6452.775: 55.9790% ( 479) 00:07:39.987 6452.775 - 6503.188: 60.9322% ( 932) 00:07:39.987 6503.188 - 6553.600: 65.7951% ( 
915) 00:07:39.987 6553.600 - 6604.012: 70.5038% ( 886) 00:07:39.987 6604.012 - 6654.425: 74.6652% ( 783) 00:07:39.987 6654.425 - 6704.837: 77.8858% ( 606) 00:07:39.987 6704.837 - 6755.249: 80.2827% ( 451) 00:07:39.987 6755.249 - 6805.662: 81.8293% ( 291) 00:07:39.987 6805.662 - 6856.074: 82.9826% ( 217) 00:07:39.987 6856.074 - 6906.486: 83.8116% ( 156) 00:07:39.987 6906.486 - 6956.898: 84.4281% ( 116) 00:07:39.987 6956.898 - 7007.311: 84.8480% ( 79) 00:07:39.987 7007.311 - 7057.723: 85.2041% ( 67) 00:07:39.987 7057.723 - 7108.135: 85.5548% ( 66) 00:07:39.987 7108.135 - 7158.548: 85.8950% ( 64) 00:07:39.987 7158.548 - 7208.960: 86.2830% ( 73) 00:07:39.987 7208.960 - 7259.372: 86.6125% ( 62) 00:07:39.987 7259.372 - 7309.785: 86.9207% ( 58) 00:07:39.987 7309.785 - 7360.197: 87.1918% ( 51) 00:07:39.987 7360.197 - 7410.609: 87.4841% ( 55) 00:07:39.987 7410.609 - 7461.022: 87.7764% ( 55) 00:07:39.987 7461.022 - 7511.434: 88.0740% ( 56) 00:07:39.987 7511.434 - 7561.846: 88.3610% ( 54) 00:07:39.987 7561.846 - 7612.258: 88.6373% ( 52) 00:07:39.987 7612.258 - 7662.671: 88.8977% ( 49) 00:07:39.987 7662.671 - 7713.083: 89.1316% ( 44) 00:07:39.987 7713.083 - 7763.495: 89.3495% ( 41) 00:07:39.987 7763.495 - 7813.908: 89.5727% ( 42) 00:07:39.987 7813.908 - 7864.320: 89.7906% ( 41) 00:07:39.987 7864.320 - 7914.732: 89.9979% ( 39) 00:07:39.987 7914.732 - 7965.145: 90.2636% ( 50) 00:07:39.987 7965.145 - 8015.557: 90.5028% ( 45) 00:07:39.987 8015.557 - 8065.969: 90.7366% ( 44) 00:07:39.987 8065.969 - 8116.382: 90.9705% ( 44) 00:07:39.987 8116.382 - 8166.794: 91.2149% ( 46) 00:07:39.987 8166.794 - 8217.206: 91.4275% ( 40) 00:07:39.987 8217.206 - 8267.618: 91.6401% ( 40) 00:07:39.987 8267.618 - 8318.031: 91.8367% ( 37) 00:07:39.987 8318.031 - 8368.443: 91.9802% ( 27) 00:07:39.987 8368.443 - 8418.855: 92.1184% ( 26) 00:07:39.987 8418.855 - 8469.268: 92.2619% ( 27) 00:07:39.987 8469.268 - 8519.680: 92.3948% ( 25) 00:07:39.987 8519.680 - 8570.092: 92.5861% ( 36) 00:07:39.987 8570.092 - 8620.505: 92.7296% ( 27) 00:07:39.987 8620.505 - 8670.917: 92.8890% ( 30) 00:07:39.987 8670.917 - 8721.329: 93.0378% ( 28) 00:07:39.987 8721.329 - 8771.742: 93.1707% ( 25) 00:07:39.987 8771.742 - 8822.154: 93.3142% ( 27) 00:07:39.987 8822.154 - 8872.566: 93.4418% ( 24) 00:07:39.987 8872.566 - 8922.978: 93.5534% ( 21) 00:07:39.987 8922.978 - 8973.391: 93.7394% ( 35) 00:07:39.987 8973.391 - 9023.803: 93.8988% ( 30) 00:07:39.987 9023.803 - 9074.215: 94.0476% ( 28) 00:07:39.987 9074.215 - 9124.628: 94.1858% ( 26) 00:07:39.987 9124.628 - 9175.040: 94.3080% ( 23) 00:07:39.987 9175.040 - 9225.452: 94.4462% ( 26) 00:07:39.987 9225.452 - 9275.865: 94.6003% ( 29) 00:07:39.987 9275.865 - 9326.277: 94.7651% ( 31) 00:07:39.987 9326.277 - 9376.689: 94.9352% ( 32) 00:07:39.987 9376.689 - 9427.102: 95.0999% ( 31) 00:07:39.987 9427.102 - 9477.514: 95.2806% ( 34) 00:07:39.987 9477.514 - 9527.926: 95.4401% ( 30) 00:07:39.987 9527.926 - 9578.338: 95.5729% ( 25) 00:07:39.987 9578.338 - 9628.751: 95.7111% ( 26) 00:07:39.987 9628.751 - 9679.163: 95.8440% ( 25) 00:07:39.987 9679.163 - 9729.575: 95.9768% ( 25) 00:07:39.987 9729.575 - 9779.988: 96.1150% ( 26) 00:07:39.987 9779.988 - 9830.400: 96.2479% ( 25) 00:07:39.987 9830.400 - 9880.812: 96.3382% ( 17) 00:07:39.987 9880.812 - 9931.225: 96.4073% ( 13) 00:07:39.987 9931.225 - 9981.637: 96.4870% ( 15) 00:07:39.987 9981.637 - 10032.049: 96.5668% ( 15) 00:07:39.987 10032.049 - 10082.462: 96.6465% ( 15) 00:07:39.987 10082.462 - 10132.874: 96.7209% ( 14) 00:07:39.987 10132.874 - 10183.286: 96.8006% ( 15) 
00:07:39.987 10183.286 - 10233.698: 96.8591% ( 11) 00:07:39.987 10233.698 - 10284.111: 96.9175% ( 11) 00:07:39.987 10284.111 - 10334.523: 96.9866% ( 13) 00:07:39.987 10334.523 - 10384.935: 97.0504% ( 12) 00:07:39.987 10384.935 - 10435.348: 97.0982% ( 9) 00:07:39.987 10435.348 - 10485.760: 97.1460% ( 9) 00:07:39.987 10485.760 - 10536.172: 97.1992% ( 10) 00:07:39.987 10536.172 - 10586.585: 97.2470% ( 9) 00:07:39.987 10586.585 - 10636.997: 97.2895% ( 8) 00:07:39.987 10636.997 - 10687.409: 97.3427% ( 10) 00:07:39.987 10687.409 - 10737.822: 97.3746% ( 6) 00:07:39.987 10737.822 - 10788.234: 97.4065% ( 6) 00:07:39.987 10788.234 - 10838.646: 97.4330% ( 5) 00:07:39.987 10838.646 - 10889.058: 97.4596% ( 5) 00:07:39.987 10889.058 - 10939.471: 97.4809% ( 4) 00:07:39.987 10939.471 - 10989.883: 97.5181% ( 7) 00:07:39.987 10989.883 - 11040.295: 97.5925% ( 14) 00:07:39.987 11040.295 - 11090.708: 97.6350% ( 8) 00:07:39.987 11090.708 - 11141.120: 97.6881% ( 10) 00:07:39.987 11141.120 - 11191.532: 97.7307% ( 8) 00:07:39.987 11191.532 - 11241.945: 97.7625% ( 6) 00:07:39.987 11241.945 - 11292.357: 97.7997% ( 7) 00:07:39.987 11292.357 - 11342.769: 97.8263% ( 5) 00:07:39.987 11342.769 - 11393.182: 97.8635% ( 7) 00:07:39.987 11393.182 - 11443.594: 97.9007% ( 7) 00:07:39.987 11443.594 - 11494.006: 97.9326% ( 6) 00:07:39.987 11494.006 - 11544.418: 97.9698% ( 7) 00:07:39.987 11544.418 - 11594.831: 98.0123% ( 8) 00:07:39.987 11594.831 - 11645.243: 98.0442% ( 6) 00:07:39.987 11645.243 - 11695.655: 98.0867% ( 8) 00:07:39.987 11695.655 - 11746.068: 98.1239% ( 7) 00:07:39.987 11746.068 - 11796.480: 98.1611% ( 7) 00:07:39.987 11796.480 - 11846.892: 98.2037% ( 8) 00:07:39.987 11846.892 - 11897.305: 98.2302% ( 5) 00:07:39.987 11897.305 - 11947.717: 98.2674% ( 7) 00:07:39.987 11947.717 - 11998.129: 98.2834% ( 3) 00:07:39.987 11998.129 - 12048.542: 98.3099% ( 5) 00:07:39.987 12048.542 - 12098.954: 98.3418% ( 6) 00:07:39.987 12098.954 - 12149.366: 98.3684% ( 5) 00:07:39.987 12149.366 - 12199.778: 98.4003% ( 6) 00:07:39.987 12199.778 - 12250.191: 98.4322% ( 6) 00:07:39.987 12250.191 - 12300.603: 98.4588% ( 5) 00:07:39.987 12300.603 - 12351.015: 98.4853% ( 5) 00:07:39.987 12351.015 - 12401.428: 98.5119% ( 5) 00:07:39.987 12401.428 - 12451.840: 98.5597% ( 9) 00:07:39.987 12451.840 - 12502.252: 98.5969% ( 7) 00:07:39.987 12502.252 - 12552.665: 98.6182% ( 4) 00:07:39.987 12552.665 - 12603.077: 98.6448% ( 5) 00:07:39.987 12603.077 - 12653.489: 98.6660% ( 4) 00:07:39.987 12653.489 - 12703.902: 98.6820% ( 3) 00:07:39.987 12703.902 - 12754.314: 98.7032% ( 4) 00:07:39.987 12754.314 - 12804.726: 98.7351% ( 6) 00:07:39.987 12804.726 - 12855.138: 98.7564% ( 4) 00:07:39.987 12855.138 - 12905.551: 98.7883% ( 6) 00:07:39.987 12905.551 - 13006.375: 98.8255% ( 7) 00:07:39.987 13006.375 - 13107.200: 98.8467% ( 4) 00:07:39.987 13107.200 - 13208.025: 98.8680% ( 4) 00:07:39.987 13208.025 - 13308.849: 98.8946% ( 5) 00:07:39.987 13308.849 - 13409.674: 98.9211% ( 5) 00:07:39.987 13409.674 - 13510.498: 98.9477% ( 5) 00:07:39.987 13510.498 - 13611.323: 98.9796% ( 6) 00:07:39.987 14317.095 - 14417.920: 98.9955% ( 3) 00:07:39.987 14417.920 - 14518.745: 99.0115% ( 3) 00:07:39.987 14518.745 - 14619.569: 99.0221% ( 2) 00:07:39.987 14619.569 - 14720.394: 99.0381% ( 3) 00:07:39.987 14720.394 - 14821.218: 99.0593% ( 4) 00:07:39.987 14821.218 - 14922.043: 99.0806% ( 4) 00:07:39.987 14922.043 - 15022.868: 99.1018% ( 4) 00:07:39.987 15022.868 - 15123.692: 99.1178% ( 3) 00:07:39.987 15123.692 - 15224.517: 99.1390% ( 4) 00:07:39.987 15224.517 - 15325.342: 99.1603% ( 
4) 00:07:39.987 15325.342 - 15426.166: 99.1815% ( 4) 00:07:39.987 15426.166 - 15526.991: 99.2028% ( 4) 00:07:39.987 15526.991 - 15627.815: 99.2188% ( 3) 00:07:39.987 15627.815 - 15728.640: 99.2347% ( 3) 00:07:39.987 15728.640 - 15829.465: 99.2560% ( 4) 00:07:39.987 15829.465 - 15930.289: 99.2772% ( 4) 00:07:39.987 15930.289 - 16031.114: 99.2985% ( 4) 00:07:39.987 16031.114 - 16131.938: 99.3144% ( 3) 00:07:39.987 16131.938 - 16232.763: 99.3197% ( 1) 00:07:39.987 22584.714 - 22685.538: 99.3250% ( 1) 00:07:39.987 22685.538 - 22786.363: 99.3304% ( 1) 00:07:39.987 22786.363 - 22887.188: 99.3410% ( 2) 00:07:39.987 22887.188 - 22988.012: 99.3516% ( 2) 00:07:39.987 22988.012 - 23088.837: 99.3729% ( 4) 00:07:39.987 23088.837 - 23189.662: 99.3888% ( 3) 00:07:39.987 23189.662 - 23290.486: 99.4101% ( 4) 00:07:39.987 23290.486 - 23391.311: 99.4313% ( 4) 00:07:39.987 23391.311 - 23492.135: 99.4526% ( 4) 00:07:39.987 23492.135 - 23592.960: 99.4739% ( 4) 00:07:39.987 23592.960 - 23693.785: 99.4951% ( 4) 00:07:39.987 23693.785 - 23794.609: 99.5164% ( 4) 00:07:39.987 23794.609 - 23895.434: 99.5376% ( 4) 00:07:39.987 23895.434 - 23996.258: 99.5536% ( 3) 00:07:39.987 23996.258 - 24097.083: 99.5748% ( 4) 00:07:39.987 24097.083 - 24197.908: 99.6014% ( 5) 00:07:39.987 24197.908 - 24298.732: 99.6227% ( 4) 00:07:39.988 24298.732 - 24399.557: 99.6492% ( 5) 00:07:39.988 24399.557 - 24500.382: 99.6599% ( 2) 00:07:39.988 27827.594 - 28029.243: 99.7024% ( 8) 00:07:39.988 28029.243 - 28230.892: 99.7502% ( 9) 00:07:39.988 28230.892 - 28432.542: 99.7927% ( 8) 00:07:39.988 28432.542 - 28634.191: 99.8352% ( 8) 00:07:39.988 28634.191 - 28835.840: 99.8778% ( 8) 00:07:39.988 28835.840 - 29037.489: 99.9150% ( 7) 00:07:39.988 29037.489 - 29239.138: 99.9628% ( 9) 00:07:39.988 29239.138 - 29440.788: 100.0000% ( 7) 00:07:39.988 00:07:39.988 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:39.988 ============================================================================== 00:07:39.988 Range in us Cumulative IO count 00:07:39.988 5570.560 - 5595.766: 0.0106% ( 2) 00:07:39.988 5595.766 - 5620.972: 0.0319% ( 4) 00:07:39.988 5620.972 - 5646.178: 0.0903% ( 11) 00:07:39.988 5646.178 - 5671.385: 0.1860% ( 18) 00:07:39.988 5671.385 - 5696.591: 0.4039% ( 41) 00:07:39.988 5696.591 - 5721.797: 0.7440% ( 64) 00:07:39.988 5721.797 - 5747.003: 1.3552% ( 115) 00:07:39.988 5747.003 - 5772.209: 2.3065% ( 179) 00:07:39.988 5772.209 - 5797.415: 3.4439% ( 214) 00:07:39.988 5797.415 - 5822.622: 4.6397% ( 225) 00:07:39.988 5822.622 - 5847.828: 6.0852% ( 272) 00:07:39.988 5847.828 - 5873.034: 7.7434% ( 312) 00:07:39.988 5873.034 - 5898.240: 9.3856% ( 309) 00:07:39.988 5898.240 - 5923.446: 11.1341% ( 329) 00:07:39.988 5923.446 - 5948.652: 13.0208% ( 355) 00:07:39.988 5948.652 - 5973.858: 14.7906% ( 333) 00:07:39.988 5973.858 - 5999.065: 16.8686% ( 391) 00:07:39.988 5999.065 - 6024.271: 18.6915% ( 343) 00:07:39.988 6024.271 - 6049.477: 20.7430% ( 386) 00:07:39.988 6049.477 - 6074.683: 22.7732% ( 382) 00:07:39.988 6074.683 - 6099.889: 24.8459% ( 390) 00:07:39.988 6099.889 - 6125.095: 26.8335% ( 374) 00:07:39.988 6125.095 - 6150.302: 28.9435% ( 397) 00:07:39.988 6150.302 - 6175.508: 30.9524% ( 378) 00:07:39.988 6175.508 - 6200.714: 33.0038% ( 386) 00:07:39.988 6200.714 - 6225.920: 35.0181% ( 379) 00:07:39.988 6225.920 - 6251.126: 37.1439% ( 400) 00:07:39.988 6251.126 - 6276.332: 39.0997% ( 368) 00:07:39.988 6276.332 - 6301.538: 41.1777% ( 391) 00:07:39.988 6301.538 - 6326.745: 43.2345% ( 387) 00:07:39.988 6326.745 - 6351.951: 45.3338% ( 
395) 00:07:39.988 6351.951 - 6377.157: 47.3586% ( 381) 00:07:39.988 6377.157 - 6402.363: 49.5536% ( 413) 00:07:39.988 6402.363 - 6427.569: 51.7060% ( 405) 00:07:39.988 6427.569 - 6452.775: 53.7628% ( 387) 00:07:39.988 6452.775 - 6503.188: 58.0304% ( 803) 00:07:39.988 6503.188 - 6553.600: 62.2715% ( 798) 00:07:39.988 6553.600 - 6604.012: 66.4488% ( 786) 00:07:39.988 6604.012 - 6654.425: 70.6048% ( 782) 00:07:39.988 6654.425 - 6704.837: 74.3994% ( 714) 00:07:39.988 6704.837 - 6755.249: 77.6998% ( 621) 00:07:39.988 6755.249 - 6805.662: 80.0223% ( 437) 00:07:39.988 6805.662 - 6856.074: 81.8134% ( 337) 00:07:39.988 6856.074 - 6906.486: 82.9666% ( 217) 00:07:39.988 6906.486 - 6956.898: 83.8489% ( 166) 00:07:39.988 6956.898 - 7007.311: 84.4600% ( 115) 00:07:39.988 7007.311 - 7057.723: 84.9809% ( 98) 00:07:39.988 7057.723 - 7108.135: 85.3529% ( 70) 00:07:39.988 7108.135 - 7158.548: 85.6983% ( 65) 00:07:39.988 7158.548 - 7208.960: 86.0013% ( 57) 00:07:39.988 7208.960 - 7259.372: 86.3361% ( 63) 00:07:39.988 7259.372 - 7309.785: 86.6497% ( 59) 00:07:39.988 7309.785 - 7360.197: 86.9526% ( 57) 00:07:39.988 7360.197 - 7410.609: 87.1864% ( 44) 00:07:39.988 7410.609 - 7461.022: 87.4097% ( 42) 00:07:39.988 7461.022 - 7511.434: 87.6169% ( 39) 00:07:39.988 7511.434 - 7561.846: 87.8880% ( 51) 00:07:39.988 7561.846 - 7612.258: 88.1218% ( 44) 00:07:39.988 7612.258 - 7662.671: 88.3503% ( 43) 00:07:39.988 7662.671 - 7713.083: 88.6426% ( 55) 00:07:39.988 7713.083 - 7763.495: 88.9296% ( 54) 00:07:39.988 7763.495 - 7813.908: 89.1688% ( 45) 00:07:39.988 7813.908 - 7864.320: 89.4664% ( 56) 00:07:39.988 7864.320 - 7914.732: 89.6949% ( 43) 00:07:39.988 7914.732 - 7965.145: 89.9341% ( 45) 00:07:39.988 7965.145 - 8015.557: 90.1945% ( 49) 00:07:39.988 8015.557 - 8065.969: 90.4230% ( 43) 00:07:39.988 8065.969 - 8116.382: 90.6835% ( 49) 00:07:39.988 8116.382 - 8166.794: 90.9439% ( 49) 00:07:39.988 8166.794 - 8217.206: 91.1618% ( 41) 00:07:39.988 8217.206 - 8267.618: 91.3903% ( 43) 00:07:39.988 8267.618 - 8318.031: 91.6029% ( 40) 00:07:39.988 8318.031 - 8368.443: 91.8261% ( 42) 00:07:39.988 8368.443 - 8418.855: 92.0334% ( 39) 00:07:39.988 8418.855 - 8469.268: 92.2353% ( 38) 00:07:39.988 8469.268 - 8519.680: 92.4213% ( 35) 00:07:39.988 8519.680 - 8570.092: 92.5595% ( 26) 00:07:39.988 8570.092 - 8620.505: 92.7455% ( 35) 00:07:39.988 8620.505 - 8670.917: 92.8997% ( 29) 00:07:39.988 8670.917 - 8721.329: 93.0538% ( 29) 00:07:39.988 8721.329 - 8771.742: 93.2132% ( 30) 00:07:39.988 8771.742 - 8822.154: 93.3408% ( 24) 00:07:39.988 8822.154 - 8872.566: 93.5002% ( 30) 00:07:39.988 8872.566 - 8922.978: 93.6543% ( 29) 00:07:39.988 8922.978 - 8973.391: 93.8350% ( 34) 00:07:39.988 8973.391 - 9023.803: 93.9945% ( 30) 00:07:39.988 9023.803 - 9074.215: 94.1592% ( 31) 00:07:39.988 9074.215 - 9124.628: 94.3399% ( 34) 00:07:39.988 9124.628 - 9175.040: 94.4940% ( 29) 00:07:39.988 9175.040 - 9225.452: 94.6375% ( 27) 00:07:39.988 9225.452 - 9275.865: 94.7651% ( 24) 00:07:39.988 9275.865 - 9326.277: 94.8714% ( 20) 00:07:39.988 9326.277 - 9376.689: 95.0096% ( 26) 00:07:39.988 9376.689 - 9427.102: 95.1159% ( 20) 00:07:39.988 9427.102 - 9477.514: 95.2912% ( 33) 00:07:39.988 9477.514 - 9527.926: 95.4135% ( 23) 00:07:39.988 9527.926 - 9578.338: 95.5782% ( 31) 00:07:39.988 9578.338 - 9628.751: 95.7217% ( 27) 00:07:39.988 9628.751 - 9679.163: 95.8493% ( 24) 00:07:39.988 9679.163 - 9729.575: 95.9768% ( 24) 00:07:39.988 9729.575 - 9779.988: 96.0991% ( 23) 00:07:39.988 9779.988 - 9830.400: 96.2585% ( 30) 00:07:39.988 9830.400 - 9880.812: 96.3914% ( 25) 
00:07:39.988 9880.812 - 9931.225: 96.5030% ( 21) 00:07:39.988 9931.225 - 9981.637: 96.6252% ( 23) 00:07:39.988 9981.637 - 10032.049: 96.7102% ( 16) 00:07:39.988 10032.049 - 10082.462: 96.8059% ( 18) 00:07:39.988 10082.462 - 10132.874: 96.8963% ( 17) 00:07:39.988 10132.874 - 10183.286: 96.9813% ( 16) 00:07:39.988 10183.286 - 10233.698: 97.0610% ( 15) 00:07:39.988 10233.698 - 10284.111: 97.1142% ( 10) 00:07:39.988 10284.111 - 10334.523: 97.1832% ( 13) 00:07:39.988 10334.523 - 10384.935: 97.2098% ( 5) 00:07:39.988 10384.935 - 10435.348: 97.2683% ( 11) 00:07:39.988 10435.348 - 10485.760: 97.3214% ( 10) 00:07:39.988 10485.760 - 10536.172: 97.3586% ( 7) 00:07:39.988 10536.172 - 10586.585: 97.4118% ( 10) 00:07:39.988 10586.585 - 10636.997: 97.4490% ( 7) 00:07:39.988 10636.997 - 10687.409: 97.4968% ( 9) 00:07:39.988 10687.409 - 10737.822: 97.5500% ( 10) 00:07:39.988 10737.822 - 10788.234: 97.5978% ( 9) 00:07:39.988 10788.234 - 10838.646: 97.6456% ( 9) 00:07:39.988 10838.646 - 10889.058: 97.6669% ( 4) 00:07:39.988 10889.058 - 10939.471: 97.6881% ( 4) 00:07:39.988 10939.471 - 10989.883: 97.7147% ( 5) 00:07:39.988 10989.883 - 11040.295: 97.7519% ( 7) 00:07:39.988 11040.295 - 11090.708: 97.7679% ( 3) 00:07:39.988 11090.708 - 11141.120: 97.8051% ( 7) 00:07:39.988 11141.120 - 11191.532: 97.8263% ( 4) 00:07:39.988 11191.532 - 11241.945: 97.8476% ( 4) 00:07:39.988 11241.945 - 11292.357: 97.8688% ( 4) 00:07:39.988 11342.769 - 11393.182: 97.8901% ( 4) 00:07:39.988 11393.182 - 11443.594: 97.9167% ( 5) 00:07:39.988 11443.594 - 11494.006: 97.9273% ( 2) 00:07:39.988 11494.006 - 11544.418: 97.9432% ( 3) 00:07:39.988 11544.418 - 11594.831: 97.9486% ( 1) 00:07:39.988 11594.831 - 11645.243: 97.9645% ( 3) 00:07:39.988 11645.243 - 11695.655: 97.9858% ( 4) 00:07:39.988 11695.655 - 11746.068: 98.0123% ( 5) 00:07:39.988 11746.068 - 11796.480: 98.0336% ( 4) 00:07:39.988 11796.480 - 11846.892: 98.0495% ( 3) 00:07:39.988 11846.892 - 11897.305: 98.0761% ( 5) 00:07:39.988 11897.305 - 11947.717: 98.1027% ( 5) 00:07:39.988 11947.717 - 11998.129: 98.1293% ( 5) 00:07:39.988 11998.129 - 12048.542: 98.1771% ( 9) 00:07:39.988 12048.542 - 12098.954: 98.2196% ( 8) 00:07:39.988 12098.954 - 12149.366: 98.2302% ( 2) 00:07:39.988 12149.366 - 12199.778: 98.2515% ( 4) 00:07:39.988 12199.778 - 12250.191: 98.2887% ( 7) 00:07:39.988 12250.191 - 12300.603: 98.3153% ( 5) 00:07:39.988 12300.603 - 12351.015: 98.3472% ( 6) 00:07:39.988 12351.015 - 12401.428: 98.3684% ( 4) 00:07:39.988 12401.428 - 12451.840: 98.4003% ( 6) 00:07:39.988 12451.840 - 12502.252: 98.4269% ( 5) 00:07:39.988 12502.252 - 12552.665: 98.4641% ( 7) 00:07:39.988 12552.665 - 12603.077: 98.4853% ( 4) 00:07:39.988 12603.077 - 12653.489: 98.5119% ( 5) 00:07:39.988 12653.489 - 12703.902: 98.5385% ( 5) 00:07:39.988 12703.902 - 12754.314: 98.5651% ( 5) 00:07:39.988 12754.314 - 12804.726: 98.5916% ( 5) 00:07:39.988 12804.726 - 12855.138: 98.6182% ( 5) 00:07:39.988 12855.138 - 12905.551: 98.6501% ( 6) 00:07:39.988 12905.551 - 13006.375: 98.7085% ( 11) 00:07:39.988 13006.375 - 13107.200: 98.7723% ( 12) 00:07:39.988 13107.200 - 13208.025: 98.8148% ( 8) 00:07:39.988 13208.025 - 13308.849: 98.8680% ( 10) 00:07:39.988 13308.849 - 13409.674: 98.9477% ( 15) 00:07:39.988 13409.674 - 13510.498: 99.0062% ( 11) 00:07:39.988 13510.498 - 13611.323: 99.0327% ( 5) 00:07:39.988 13611.323 - 13712.148: 99.0593% ( 5) 00:07:39.988 13712.148 - 13812.972: 99.0753% ( 3) 00:07:39.988 13812.972 - 13913.797: 99.0912% ( 3) 00:07:39.988 13913.797 - 14014.622: 99.1071% ( 3) 00:07:39.988 14014.622 - 14115.446: 
99.1178% ( 2) 00:07:39.988 14115.446 - 14216.271: 99.1337% ( 3) 00:07:39.988 14216.271 - 14317.095: 99.1497% ( 3) 00:07:39.988 14317.095 - 14417.920: 99.1709% ( 4) 00:07:39.988 14417.920 - 14518.745: 99.1869% ( 3) 00:07:39.988 14518.745 - 14619.569: 99.2028% ( 3) 00:07:39.988 14619.569 - 14720.394: 99.2241% ( 4) 00:07:39.988 14720.394 - 14821.218: 99.2400% ( 3) 00:07:39.989 14821.218 - 14922.043: 99.2560% ( 3) 00:07:39.989 14922.043 - 15022.868: 99.2719% ( 3) 00:07:39.989 15022.868 - 15123.692: 99.2878% ( 3) 00:07:39.989 15123.692 - 15224.517: 99.3038% ( 3) 00:07:39.989 15224.517 - 15325.342: 99.3197% ( 3) 00:07:39.989 21273.994 - 21374.818: 99.3250% ( 1) 00:07:39.989 21374.818 - 21475.643: 99.3516% ( 5) 00:07:39.989 21475.643 - 21576.468: 99.3622% ( 2) 00:07:39.989 21576.468 - 21677.292: 99.3782% ( 3) 00:07:39.989 21677.292 - 21778.117: 99.3994% ( 4) 00:07:39.989 21778.117 - 21878.942: 99.4207% ( 4) 00:07:39.989 21878.942 - 21979.766: 99.4420% ( 4) 00:07:39.989 21979.766 - 22080.591: 99.4579% ( 3) 00:07:39.989 22080.591 - 22181.415: 99.4792% ( 4) 00:07:39.989 22181.415 - 22282.240: 99.5004% ( 4) 00:07:39.989 22282.240 - 22383.065: 99.5164% ( 3) 00:07:39.989 22383.065 - 22483.889: 99.5429% ( 5) 00:07:39.989 22483.889 - 22584.714: 99.5589% ( 3) 00:07:39.989 22584.714 - 22685.538: 99.5855% ( 5) 00:07:39.989 22685.538 - 22786.363: 99.6067% ( 4) 00:07:39.989 22786.363 - 22887.188: 99.6227% ( 3) 00:07:39.989 22887.188 - 22988.012: 99.6439% ( 4) 00:07:39.989 22988.012 - 23088.837: 99.6599% ( 3) 00:07:39.989 26214.400 - 26416.049: 99.7077% ( 9) 00:07:39.989 26416.049 - 26617.698: 99.7449% ( 7) 00:07:39.989 26617.698 - 26819.348: 99.7874% ( 8) 00:07:39.989 26819.348 - 27020.997: 99.8246% ( 7) 00:07:39.989 27020.997 - 27222.646: 99.8618% ( 7) 00:07:39.989 27222.646 - 27424.295: 99.8990% ( 7) 00:07:39.989 27424.295 - 27625.945: 99.9362% ( 7) 00:07:39.989 27625.945 - 27827.594: 99.9841% ( 9) 00:07:39.989 27827.594 - 28029.243: 100.0000% ( 3) 00:07:39.989 00:07:39.989 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:39.989 ============================================================================== 00:07:39.989 Range in us Cumulative IO count 00:07:39.989 5646.178 - 5671.385: 0.0106% ( 2) 00:07:39.989 5671.385 - 5696.591: 0.0638% ( 10) 00:07:39.989 5696.591 - 5721.797: 0.1169% ( 10) 00:07:39.989 5721.797 - 5747.003: 0.2073% ( 17) 00:07:39.989 5747.003 - 5772.209: 0.4411% ( 44) 00:07:39.989 5772.209 - 5797.415: 0.8238% ( 72) 00:07:39.989 5797.415 - 5822.622: 1.3977% ( 108) 00:07:39.989 5822.622 - 5847.828: 2.2428% ( 159) 00:07:39.989 5847.828 - 5873.034: 3.3695% ( 212) 00:07:39.989 5873.034 - 5898.240: 4.9851% ( 304) 00:07:39.989 5898.240 - 5923.446: 6.8559% ( 352) 00:07:39.989 5923.446 - 5948.652: 8.8010% ( 366) 00:07:39.989 5948.652 - 5973.858: 10.9694% ( 408) 00:07:39.989 5973.858 - 5999.065: 13.3131% ( 441) 00:07:39.989 5999.065 - 6024.271: 15.3008% ( 374) 00:07:39.989 6024.271 - 6049.477: 17.2247% ( 362) 00:07:39.989 6049.477 - 6074.683: 19.5791% ( 443) 00:07:39.989 6074.683 - 6099.889: 21.9016% ( 437) 00:07:39.989 6099.889 - 6125.095: 24.2666% ( 445) 00:07:39.989 6125.095 - 6150.302: 26.6263% ( 444) 00:07:39.989 6150.302 - 6175.508: 29.1507% ( 475) 00:07:39.989 6175.508 - 6200.714: 31.6273% ( 466) 00:07:39.989 6200.714 - 6225.920: 34.0455% ( 455) 00:07:39.989 6225.920 - 6251.126: 36.4318% ( 449) 00:07:39.989 6251.126 - 6276.332: 38.8340% ( 452) 00:07:39.989 6276.332 - 6301.538: 41.2521% ( 455) 00:07:39.989 6301.538 - 6326.745: 43.7022% ( 461) 00:07:39.989 6326.745 - 6351.951: 
46.2107% ( 472) 00:07:39.989 6351.951 - 6377.157: 48.7192% ( 472) 00:07:39.989 6377.157 - 6402.363: 51.2011% ( 467) 00:07:39.989 6402.363 - 6427.569: 53.7096% ( 472) 00:07:39.989 6427.569 - 6452.775: 56.1969% ( 468) 00:07:39.989 6452.775 - 6503.188: 61.1554% ( 933) 00:07:39.989 6503.188 - 6553.600: 66.1458% ( 939) 00:07:39.989 6553.600 - 6604.012: 70.9290% ( 900) 00:07:39.989 6604.012 - 6654.425: 74.9734% ( 761) 00:07:39.989 6654.425 - 6704.837: 78.2153% ( 610) 00:07:39.989 6704.837 - 6755.249: 80.5857% ( 446) 00:07:39.989 6755.249 - 6805.662: 82.1588% ( 296) 00:07:39.989 6805.662 - 6856.074: 83.3227% ( 219) 00:07:39.989 6856.074 - 6906.486: 84.0668% ( 140) 00:07:39.989 6906.486 - 6956.898: 84.5716% ( 95) 00:07:39.989 6956.898 - 7007.311: 84.9649% ( 74) 00:07:39.989 7007.311 - 7057.723: 85.2679% ( 57) 00:07:39.989 7057.723 - 7108.135: 85.5655% ( 56) 00:07:39.989 7108.135 - 7158.548: 85.8153% ( 47) 00:07:39.989 7158.548 - 7208.960: 86.0810% ( 50) 00:07:39.989 7208.960 - 7259.372: 86.3574% ( 52) 00:07:39.989 7259.372 - 7309.785: 86.5540% ( 37) 00:07:39.989 7309.785 - 7360.197: 86.7453% ( 36) 00:07:39.989 7360.197 - 7410.609: 86.9313% ( 35) 00:07:39.989 7410.609 - 7461.022: 87.1333% ( 38) 00:07:39.989 7461.022 - 7511.434: 87.3618% ( 43) 00:07:39.989 7511.434 - 7561.846: 87.5691% ( 39) 00:07:39.989 7561.846 - 7612.258: 87.7551% ( 35) 00:07:39.989 7612.258 - 7662.671: 88.0208% ( 50) 00:07:39.989 7662.671 - 7713.083: 88.2812% ( 49) 00:07:39.989 7713.083 - 7763.495: 88.5842% ( 57) 00:07:39.989 7763.495 - 7813.908: 88.8552% ( 51) 00:07:39.989 7813.908 - 7864.320: 89.1210% ( 50) 00:07:39.989 7864.320 - 7914.732: 89.3867% ( 50) 00:07:39.989 7914.732 - 7965.145: 89.6259% ( 45) 00:07:39.989 7965.145 - 8015.557: 89.8863% ( 49) 00:07:39.989 8015.557 - 8065.969: 90.1626% ( 52) 00:07:39.989 8065.969 - 8116.382: 90.3912% ( 43) 00:07:39.989 8116.382 - 8166.794: 90.6303% ( 45) 00:07:39.989 8166.794 - 8217.206: 90.8642% ( 44) 00:07:39.989 8217.206 - 8267.618: 91.0874% ( 42) 00:07:39.989 8267.618 - 8318.031: 91.2734% ( 35) 00:07:39.989 8318.031 - 8368.443: 91.4753% ( 38) 00:07:39.989 8368.443 - 8418.855: 91.7358% ( 49) 00:07:39.989 8418.855 - 8469.268: 91.9643% ( 43) 00:07:39.989 8469.268 - 8519.680: 92.1450% ( 34) 00:07:39.989 8519.680 - 8570.092: 92.3151% ( 32) 00:07:39.989 8570.092 - 8620.505: 92.4692% ( 29) 00:07:39.989 8620.505 - 8670.917: 92.6392% ( 32) 00:07:39.989 8670.917 - 8721.329: 92.8040% ( 31) 00:07:39.989 8721.329 - 8771.742: 92.9847% ( 34) 00:07:39.989 8771.742 - 8822.154: 93.1601% ( 33) 00:07:39.989 8822.154 - 8872.566: 93.3195% ( 30) 00:07:39.989 8872.566 - 8922.978: 93.4471% ( 24) 00:07:39.989 8922.978 - 8973.391: 93.6224% ( 33) 00:07:39.989 8973.391 - 9023.803: 93.8085% ( 35) 00:07:39.989 9023.803 - 9074.215: 94.0104% ( 38) 00:07:39.989 9074.215 - 9124.628: 94.2230% ( 40) 00:07:39.989 9124.628 - 9175.040: 94.4090% ( 35) 00:07:39.989 9175.040 - 9225.452: 94.5685% ( 30) 00:07:39.989 9225.452 - 9275.865: 94.7279% ( 30) 00:07:39.989 9275.865 - 9326.277: 94.8342% ( 20) 00:07:39.989 9326.277 - 9376.689: 94.9670% ( 25) 00:07:39.989 9376.689 - 9427.102: 95.1159% ( 28) 00:07:39.989 9427.102 - 9477.514: 95.2700% ( 29) 00:07:39.989 9477.514 - 9527.926: 95.4560% ( 35) 00:07:39.989 9527.926 - 9578.338: 95.5995% ( 27) 00:07:39.989 9578.338 - 9628.751: 95.7483% ( 28) 00:07:39.989 9628.751 - 9679.163: 95.9024% ( 29) 00:07:39.989 9679.163 - 9729.575: 96.0300% ( 24) 00:07:39.989 9729.575 - 9779.988: 96.1735% ( 27) 00:07:39.989 9779.988 - 9830.400: 96.3063% ( 25) 00:07:39.989 9830.400 - 9880.812: 96.4073% ( 
19) 00:07:39.989 9880.812 - 9931.225: 96.5295% ( 23) 00:07:39.989 9931.225 - 9981.637: 96.6518% ( 23) 00:07:39.989 9981.637 - 10032.049: 96.7581% ( 20) 00:07:39.989 10032.049 - 10082.462: 96.8644% ( 20) 00:07:39.989 10082.462 - 10132.874: 96.9653% ( 19) 00:07:39.989 10132.874 - 10183.286: 97.0557% ( 17) 00:07:39.989 10183.286 - 10233.698: 97.1195% ( 12) 00:07:39.989 10233.698 - 10284.111: 97.1832% ( 12) 00:07:39.989 10284.111 - 10334.523: 97.2683% ( 16) 00:07:39.989 10334.523 - 10384.935: 97.3214% ( 10) 00:07:39.989 10384.935 - 10435.348: 97.3693% ( 9) 00:07:39.989 10435.348 - 10485.760: 97.4171% ( 9) 00:07:39.989 10485.760 - 10536.172: 97.4649% ( 9) 00:07:39.989 10536.172 - 10586.585: 97.4968% ( 6) 00:07:39.989 10586.585 - 10636.997: 97.5287% ( 6) 00:07:39.989 10636.997 - 10687.409: 97.5659% ( 7) 00:07:39.989 10687.409 - 10737.822: 97.5978% ( 6) 00:07:39.989 10737.822 - 10788.234: 97.6297% ( 6) 00:07:39.989 10788.234 - 10838.646: 97.6669% ( 7) 00:07:39.989 10838.646 - 10889.058: 97.6988% ( 6) 00:07:39.989 10889.058 - 10939.471: 97.7360% ( 7) 00:07:39.989 10939.471 - 10989.883: 97.7679% ( 6) 00:07:39.989 10989.883 - 11040.295: 97.8104% ( 8) 00:07:39.989 11040.295 - 11090.708: 97.8369% ( 5) 00:07:39.989 11090.708 - 11141.120: 97.8582% ( 4) 00:07:39.989 11141.120 - 11191.532: 97.8741% ( 3) 00:07:39.989 11191.532 - 11241.945: 97.8901% ( 3) 00:07:39.989 11241.945 - 11292.357: 97.9060% ( 3) 00:07:39.989 11292.357 - 11342.769: 97.9167% ( 2) 00:07:39.989 11342.769 - 11393.182: 97.9273% ( 2) 00:07:39.989 11393.182 - 11443.594: 97.9432% ( 3) 00:07:39.989 11443.594 - 11494.006: 97.9592% ( 3) 00:07:39.989 11695.655 - 11746.068: 97.9645% ( 1) 00:07:39.989 11746.068 - 11796.480: 98.0070% ( 8) 00:07:39.989 11796.480 - 11846.892: 98.0230% ( 3) 00:07:39.989 11846.892 - 11897.305: 98.0336% ( 2) 00:07:39.989 11897.305 - 11947.717: 98.0389% ( 1) 00:07:39.989 11947.717 - 11998.129: 98.0548% ( 3) 00:07:39.989 11998.129 - 12048.542: 98.0814% ( 5) 00:07:39.989 12048.542 - 12098.954: 98.0974% ( 3) 00:07:39.989 12098.954 - 12149.366: 98.1186% ( 4) 00:07:39.989 12149.366 - 12199.778: 98.1452% ( 5) 00:07:39.989 12199.778 - 12250.191: 98.1718% ( 5) 00:07:39.989 12250.191 - 12300.603: 98.1930% ( 4) 00:07:39.989 12300.603 - 12351.015: 98.2143% ( 4) 00:07:39.989 12351.015 - 12401.428: 98.2249% ( 2) 00:07:39.989 12401.428 - 12451.840: 98.2515% ( 5) 00:07:39.989 12451.840 - 12502.252: 98.2993% ( 9) 00:07:39.989 12502.252 - 12552.665: 98.3365% ( 7) 00:07:39.990 12552.665 - 12603.077: 98.3631% ( 5) 00:07:39.990 12603.077 - 12653.489: 98.4003% ( 7) 00:07:39.990 12653.489 - 12703.902: 98.4322% ( 6) 00:07:39.990 12703.902 - 12754.314: 98.4853% ( 10) 00:07:39.990 12754.314 - 12804.726: 98.5332% ( 9) 00:07:39.990 12804.726 - 12855.138: 98.5704% ( 7) 00:07:39.990 12855.138 - 12905.551: 98.6182% ( 9) 00:07:39.990 12905.551 - 13006.375: 98.6926% ( 14) 00:07:39.990 13006.375 - 13107.200: 98.7404% ( 9) 00:07:39.990 13107.200 - 13208.025: 98.7936% ( 10) 00:07:39.990 13208.025 - 13308.849: 98.8520% ( 11) 00:07:39.990 13308.849 - 13409.674: 98.8999% ( 9) 00:07:39.990 13409.674 - 13510.498: 98.9690% ( 13) 00:07:39.990 13510.498 - 13611.323: 99.0327% ( 12) 00:07:39.990 13611.323 - 13712.148: 99.0699% ( 7) 00:07:39.990 13712.148 - 13812.972: 99.1231% ( 10) 00:07:39.990 13812.972 - 13913.797: 99.1869% ( 12) 00:07:39.990 13913.797 - 14014.622: 99.2453% ( 11) 00:07:39.990 14014.622 - 14115.446: 99.2825% ( 7) 00:07:39.990 14115.446 - 14216.271: 99.3144% ( 6) 00:07:39.990 14216.271 - 14317.095: 99.3197% ( 1) 00:07:39.990 19559.975 - 19660.800: 
99.3250% ( 1) 00:07:39.990 19660.800 - 19761.625: 99.3463% ( 4) 00:07:39.990 19761.625 - 19862.449: 99.3676% ( 4) 00:07:39.990 19862.449 - 19963.274: 99.3888% ( 4) 00:07:39.990 19963.274 - 20064.098: 99.4101% ( 4) 00:07:39.990 20064.098 - 20164.923: 99.4260% ( 3) 00:07:39.990 20164.923 - 20265.748: 99.4526% ( 5) 00:07:39.990 20265.748 - 20366.572: 99.4739% ( 4) 00:07:39.990 20366.572 - 20467.397: 99.4951% ( 4) 00:07:39.990 20467.397 - 20568.222: 99.5111% ( 3) 00:07:39.990 20568.222 - 20669.046: 99.5376% ( 5) 00:07:39.990 20669.046 - 20769.871: 99.5589% ( 4) 00:07:39.990 20769.871 - 20870.695: 99.5801% ( 4) 00:07:39.990 20870.695 - 20971.520: 99.6014% ( 4) 00:07:39.990 20971.520 - 21072.345: 99.6227% ( 4) 00:07:39.990 21072.345 - 21173.169: 99.6492% ( 5) 00:07:39.990 21173.169 - 21273.994: 99.6599% ( 2) 00:07:39.990 24399.557 - 24500.382: 99.6652% ( 1) 00:07:39.990 24500.382 - 24601.206: 99.6864% ( 4) 00:07:39.990 24601.206 - 24702.031: 99.7024% ( 3) 00:07:39.990 24702.031 - 24802.855: 99.7183% ( 3) 00:07:39.990 24802.855 - 24903.680: 99.7396% ( 4) 00:07:39.990 24903.680 - 25004.505: 99.7608% ( 4) 00:07:39.990 25004.505 - 25105.329: 99.7821% ( 4) 00:07:39.990 25105.329 - 25206.154: 99.8034% ( 4) 00:07:39.990 25206.154 - 25306.978: 99.8246% ( 4) 00:07:39.990 25306.978 - 25407.803: 99.8459% ( 4) 00:07:39.990 25407.803 - 25508.628: 99.8671% ( 4) 00:07:39.990 25508.628 - 25609.452: 99.8884% ( 4) 00:07:39.990 25609.452 - 25710.277: 99.9097% ( 4) 00:07:39.990 25710.277 - 25811.102: 99.9309% ( 4) 00:07:39.990 25811.102 - 26012.751: 99.9734% ( 8) 00:07:39.990 26012.751 - 26214.400: 100.0000% ( 5) 00:07:39.990 00:07:39.990 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:39.990 ============================================================================== 00:07:39.990 Range in us Cumulative IO count 00:07:39.990 5696.591 - 5721.797: 0.0159% ( 3) 00:07:39.990 5721.797 - 5747.003: 0.1648% ( 28) 00:07:39.990 5747.003 - 5772.209: 0.3667% ( 38) 00:07:39.990 5772.209 - 5797.415: 0.7866% ( 79) 00:07:39.990 5797.415 - 5822.622: 1.3924% ( 114) 00:07:39.990 5822.622 - 5847.828: 2.2428% ( 160) 00:07:39.990 5847.828 - 5873.034: 3.4864% ( 234) 00:07:39.990 5873.034 - 5898.240: 5.0702% ( 298) 00:07:39.990 5898.240 - 5923.446: 6.8187% ( 329) 00:07:39.990 5923.446 - 5948.652: 8.8223% ( 377) 00:07:39.990 5948.652 - 5973.858: 10.7462% ( 362) 00:07:39.990 5973.858 - 5999.065: 12.8082% ( 388) 00:07:39.990 5999.065 - 6024.271: 15.0298% ( 418) 00:07:39.990 6024.271 - 6049.477: 17.4426% ( 454) 00:07:39.990 6049.477 - 6074.683: 19.8023% ( 444) 00:07:39.990 6074.683 - 6099.889: 22.1726% ( 446) 00:07:39.990 6099.889 - 6125.095: 24.5057% ( 439) 00:07:39.990 6125.095 - 6150.302: 26.8920% ( 449) 00:07:39.990 6150.302 - 6175.508: 29.3420% ( 461) 00:07:39.990 6175.508 - 6200.714: 31.7921% ( 461) 00:07:39.990 6200.714 - 6225.920: 34.2474% ( 462) 00:07:39.990 6225.920 - 6251.126: 36.6762% ( 457) 00:07:39.990 6251.126 - 6276.332: 39.1263% ( 461) 00:07:39.990 6276.332 - 6301.538: 41.5923% ( 464) 00:07:39.990 6301.538 - 6326.745: 44.0636% ( 465) 00:07:39.990 6326.745 - 6351.951: 46.4923% ( 457) 00:07:39.990 6351.951 - 6377.157: 49.0487% ( 481) 00:07:39.990 6377.157 - 6402.363: 51.5359% ( 468) 00:07:39.990 6402.363 - 6427.569: 53.9860% ( 461) 00:07:39.990 6427.569 - 6452.775: 56.4998% ( 473) 00:07:39.990 6452.775 - 6503.188: 61.5327% ( 947) 00:07:39.990 6503.188 - 6553.600: 66.4488% ( 925) 00:07:39.990 6553.600 - 6604.012: 71.2266% ( 899) 00:07:39.990 6604.012 - 6654.425: 75.4464% ( 794) 00:07:39.990 6654.425 - 
6704.837: 78.7096% ( 614) 00:07:39.990 6704.837 - 6755.249: 80.9790% ( 427) 00:07:39.990 6755.249 - 6805.662: 82.5096% ( 288) 00:07:39.990 6805.662 - 6856.074: 83.6150% ( 208) 00:07:39.990 6856.074 - 6906.486: 84.3378% ( 136) 00:07:39.990 6906.486 - 6956.898: 84.8480% ( 96) 00:07:39.990 6956.898 - 7007.311: 85.2838% ( 82) 00:07:39.990 7007.311 - 7057.723: 85.6027% ( 60) 00:07:39.990 7057.723 - 7108.135: 85.8897% ( 54) 00:07:39.990 7108.135 - 7158.548: 86.1288% ( 45) 00:07:39.990 7158.548 - 7208.960: 86.3627% ( 44) 00:07:39.990 7208.960 - 7259.372: 86.6443% ( 53) 00:07:39.990 7259.372 - 7309.785: 86.8676% ( 42) 00:07:39.990 7309.785 - 7360.197: 87.0642% ( 37) 00:07:39.990 7360.197 - 7410.609: 87.2449% ( 34) 00:07:39.990 7410.609 - 7461.022: 87.4256% ( 34) 00:07:39.990 7461.022 - 7511.434: 87.5850% ( 30) 00:07:39.990 7511.434 - 7561.846: 87.7870% ( 38) 00:07:39.990 7561.846 - 7612.258: 88.0208% ( 44) 00:07:39.990 7612.258 - 7662.671: 88.2653% ( 46) 00:07:39.990 7662.671 - 7713.083: 88.4991% ( 44) 00:07:39.990 7713.083 - 7763.495: 88.7277% ( 43) 00:07:39.990 7763.495 - 7813.908: 88.9668% ( 45) 00:07:39.990 7813.908 - 7864.320: 89.2060% ( 45) 00:07:39.990 7864.320 - 7914.732: 89.4239% ( 41) 00:07:39.990 7914.732 - 7965.145: 89.6524% ( 43) 00:07:39.990 7965.145 - 8015.557: 89.8384% ( 35) 00:07:39.990 8015.557 - 8065.969: 90.0776% ( 45) 00:07:39.990 8065.969 - 8116.382: 90.3008% ( 42) 00:07:39.990 8116.382 - 8166.794: 90.5134% ( 40) 00:07:39.990 8166.794 - 8217.206: 90.7047% ( 36) 00:07:39.990 8217.206 - 8267.618: 90.8960% ( 36) 00:07:39.990 8267.618 - 8318.031: 91.0874% ( 36) 00:07:39.990 8318.031 - 8368.443: 91.2787% ( 36) 00:07:39.990 8368.443 - 8418.855: 91.4541% ( 33) 00:07:39.990 8418.855 - 8469.268: 91.6720% ( 41) 00:07:39.990 8469.268 - 8519.680: 91.8474% ( 33) 00:07:39.990 8519.680 - 8570.092: 92.0865% ( 45) 00:07:39.990 8570.092 - 8620.505: 92.3151% ( 43) 00:07:39.990 8620.505 - 8670.917: 92.5276% ( 40) 00:07:39.990 8670.917 - 8721.329: 92.7509% ( 42) 00:07:39.990 8721.329 - 8771.742: 92.9741% ( 42) 00:07:39.990 8771.742 - 8822.154: 93.1707% ( 37) 00:07:39.990 8822.154 - 8872.566: 93.3833% ( 40) 00:07:39.990 8872.566 - 8922.978: 93.5693% ( 35) 00:07:39.990 8922.978 - 8973.391: 93.7553% ( 35) 00:07:39.990 8973.391 - 9023.803: 93.9360% ( 34) 00:07:39.990 9023.803 - 9074.215: 94.1114% ( 33) 00:07:39.990 9074.215 - 9124.628: 94.3134% ( 38) 00:07:39.990 9124.628 - 9175.040: 94.4834% ( 32) 00:07:39.990 9175.040 - 9225.452: 94.6694% ( 35) 00:07:39.990 9225.452 - 9275.865: 94.8554% ( 35) 00:07:39.990 9275.865 - 9326.277: 95.0308% ( 33) 00:07:39.990 9326.277 - 9376.689: 95.2009% ( 32) 00:07:39.990 9376.689 - 9427.102: 95.3550% ( 29) 00:07:39.990 9427.102 - 9477.514: 95.4985% ( 27) 00:07:39.990 9477.514 - 9527.926: 95.6048% ( 20) 00:07:39.990 9527.926 - 9578.338: 95.6952% ( 17) 00:07:39.990 9578.338 - 9628.751: 95.7855% ( 17) 00:07:39.990 9628.751 - 9679.163: 95.8865% ( 19) 00:07:39.990 9679.163 - 9729.575: 95.9503% ( 12) 00:07:39.990 9729.575 - 9779.988: 96.0512% ( 19) 00:07:39.990 9779.988 - 9830.400: 96.1575% ( 20) 00:07:39.990 9830.400 - 9880.812: 96.2532% ( 18) 00:07:39.990 9880.812 - 9931.225: 96.3489% ( 18) 00:07:39.990 9931.225 - 9981.637: 96.4658% ( 22) 00:07:39.990 9981.637 - 10032.049: 96.5614% ( 18) 00:07:39.990 10032.049 - 10082.462: 96.6465% ( 16) 00:07:39.990 10082.462 - 10132.874: 96.7315% ( 16) 00:07:39.990 10132.874 - 10183.286: 96.8165% ( 16) 00:07:39.990 10183.286 - 10233.698: 96.9122% ( 18) 00:07:39.990 10233.698 - 10284.111: 97.0026% ( 17) 00:07:39.990 10284.111 - 
10334.523: 97.0716% ( 13) 00:07:39.990 10334.523 - 10384.935: 97.1407% ( 13) 00:07:39.990 10384.935 - 10435.348: 97.1992% ( 11) 00:07:39.990 10435.348 - 10485.760: 97.2577% ( 11) 00:07:39.990 10485.760 - 10536.172: 97.3161% ( 11) 00:07:39.990 10536.172 - 10586.585: 97.3746% ( 11) 00:07:39.990 10586.585 - 10636.997: 97.4171% ( 8) 00:07:39.990 10636.997 - 10687.409: 97.4543% ( 7) 00:07:39.990 10687.409 - 10737.822: 97.4862% ( 6) 00:07:39.990 10737.822 - 10788.234: 97.5287% ( 8) 00:07:39.990 10788.234 - 10838.646: 97.5606% ( 6) 00:07:39.990 10838.646 - 10889.058: 97.5978% ( 7) 00:07:39.990 10889.058 - 10939.471: 97.6350% ( 7) 00:07:39.990 10939.471 - 10989.883: 97.6828% ( 9) 00:07:39.990 10989.883 - 11040.295: 97.7253% ( 8) 00:07:39.990 11040.295 - 11090.708: 97.7732% ( 9) 00:07:39.990 11090.708 - 11141.120: 97.8051% ( 6) 00:07:39.990 11141.120 - 11191.532: 97.8476% ( 8) 00:07:39.990 11191.532 - 11241.945: 97.8848% ( 7) 00:07:39.990 11241.945 - 11292.357: 97.9114% ( 5) 00:07:39.990 11292.357 - 11342.769: 97.9486% ( 7) 00:07:39.990 11342.769 - 11393.182: 97.9804% ( 6) 00:07:39.991 11393.182 - 11443.594: 98.0123% ( 6) 00:07:39.991 11443.594 - 11494.006: 98.0495% ( 7) 00:07:39.991 11494.006 - 11544.418: 98.0708% ( 4) 00:07:39.991 11544.418 - 11594.831: 98.0920% ( 4) 00:07:39.991 11594.831 - 11645.243: 98.1080% ( 3) 00:07:39.991 11645.243 - 11695.655: 98.1239% ( 3) 00:07:39.991 11695.655 - 11746.068: 98.1346% ( 2) 00:07:39.991 11746.068 - 11796.480: 98.1399% ( 1) 00:07:39.991 11796.480 - 11846.892: 98.1505% ( 2) 00:07:39.991 11846.892 - 11897.305: 98.1611% ( 2) 00:07:39.991 11897.305 - 11947.717: 98.1718% ( 2) 00:07:39.991 11947.717 - 11998.129: 98.1824% ( 2) 00:07:39.991 11998.129 - 12048.542: 98.1930% ( 2) 00:07:39.991 12048.542 - 12098.954: 98.2037% ( 2) 00:07:39.991 12098.954 - 12149.366: 98.2143% ( 2) 00:07:39.991 12149.366 - 12199.778: 98.2249% ( 2) 00:07:39.991 12199.778 - 12250.191: 98.2355% ( 2) 00:07:39.991 12250.191 - 12300.603: 98.2462% ( 2) 00:07:39.991 12300.603 - 12351.015: 98.2887% ( 8) 00:07:39.991 12351.015 - 12401.428: 98.2993% ( 2) 00:07:39.991 12401.428 - 12451.840: 98.3153% ( 3) 00:07:39.991 12451.840 - 12502.252: 98.3206% ( 1) 00:07:39.991 12502.252 - 12552.665: 98.3365% ( 3) 00:07:39.991 12552.665 - 12603.077: 98.3578% ( 4) 00:07:39.991 12603.077 - 12653.489: 98.4056% ( 9) 00:07:39.991 12653.489 - 12703.902: 98.4109% ( 1) 00:07:39.991 12703.902 - 12754.314: 98.4162% ( 1) 00:07:39.991 12754.314 - 12804.726: 98.4375% ( 4) 00:07:39.991 12804.726 - 12855.138: 98.4534% ( 3) 00:07:39.991 12855.138 - 12905.551: 98.4747% ( 4) 00:07:39.991 12905.551 - 13006.375: 98.5013% ( 5) 00:07:39.991 13006.375 - 13107.200: 98.5385% ( 7) 00:07:39.991 13107.200 - 13208.025: 98.6076% ( 13) 00:07:39.991 13208.025 - 13308.849: 98.6873% ( 15) 00:07:39.991 13308.849 - 13409.674: 98.7457% ( 11) 00:07:39.991 13409.674 - 13510.498: 98.7989% ( 10) 00:07:39.991 13510.498 - 13611.323: 98.8520% ( 10) 00:07:39.991 13611.323 - 13712.148: 98.9158% ( 12) 00:07:39.991 13712.148 - 13812.972: 98.9690% ( 10) 00:07:39.991 13812.972 - 13913.797: 99.0274% ( 11) 00:07:39.991 13913.797 - 14014.622: 99.0912% ( 12) 00:07:39.991 14014.622 - 14115.446: 99.1497% ( 11) 00:07:39.991 14115.446 - 14216.271: 99.1922% ( 8) 00:07:39.991 14216.271 - 14317.095: 99.2347% ( 8) 00:07:39.991 14317.095 - 14417.920: 99.2719% ( 7) 00:07:39.991 14417.920 - 14518.745: 99.2985% ( 5) 00:07:39.991 14518.745 - 14619.569: 99.3197% ( 4) 00:07:39.991 18249.255 - 18350.080: 99.3357% ( 3) 00:07:39.991 18350.080 - 18450.905: 99.3569% ( 4) 00:07:39.991 
18450.905 - 18551.729: 99.3782% ( 4) 00:07:39.991 18551.729 - 18652.554: 99.3941% ( 3) 00:07:39.991 18652.554 - 18753.378: 99.4207% ( 5) 00:07:39.991 18753.378 - 18854.203: 99.4420% ( 4) 00:07:39.991 18854.203 - 18955.028: 99.4632% ( 4) 00:07:39.991 18955.028 - 19055.852: 99.4845% ( 4) 00:07:39.991 19055.852 - 19156.677: 99.5004% ( 3) 00:07:39.991 19156.677 - 19257.502: 99.5217% ( 4) 00:07:39.991 19257.502 - 19358.326: 99.5376% ( 3) 00:07:39.991 19358.326 - 19459.151: 99.5589% ( 4) 00:07:39.991 19459.151 - 19559.975: 99.5801% ( 4) 00:07:39.991 19559.975 - 19660.800: 99.6067% ( 5) 00:07:39.991 19660.800 - 19761.625: 99.6280% ( 4) 00:07:39.991 19761.625 - 19862.449: 99.6492% ( 4) 00:07:39.991 19862.449 - 19963.274: 99.6599% ( 2) 00:07:39.991 23088.837 - 23189.662: 99.6758% ( 3) 00:07:39.991 23189.662 - 23290.486: 99.6918% ( 3) 00:07:39.991 23290.486 - 23391.311: 99.7130% ( 4) 00:07:39.991 23391.311 - 23492.135: 99.7343% ( 4) 00:07:39.991 23492.135 - 23592.960: 99.7555% ( 4) 00:07:39.991 23592.960 - 23693.785: 99.7768% ( 4) 00:07:39.991 23693.785 - 23794.609: 99.7980% ( 4) 00:07:39.991 23794.609 - 23895.434: 99.8140% ( 3) 00:07:39.991 23895.434 - 23996.258: 99.8352% ( 4) 00:07:39.991 23996.258 - 24097.083: 99.8565% ( 4) 00:07:39.991 24097.083 - 24197.908: 99.8778% ( 4) 00:07:39.991 24197.908 - 24298.732: 99.8990% ( 4) 00:07:39.991 24298.732 - 24399.557: 99.9203% ( 4) 00:07:39.991 24399.557 - 24500.382: 99.9415% ( 4) 00:07:39.991 24500.382 - 24601.206: 99.9575% ( 3) 00:07:39.991 24601.206 - 24702.031: 99.9787% ( 4) 00:07:39.991 24702.031 - 24802.855: 100.0000% ( 4) 00:07:39.991 00:07:39.991 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:39.991 ============================================================================== 00:07:39.991 Range in us Cumulative IO count 00:07:39.991 5696.591 - 5721.797: 0.0744% ( 14) 00:07:39.991 5721.797 - 5747.003: 0.1860% ( 21) 00:07:39.991 5747.003 - 5772.209: 0.3295% ( 27) 00:07:39.991 5772.209 - 5797.415: 0.6324% ( 57) 00:07:39.991 5797.415 - 5822.622: 1.2596% ( 118) 00:07:39.991 5822.622 - 5847.828: 2.1365% ( 165) 00:07:39.991 5847.828 - 5873.034: 3.3535% ( 229) 00:07:39.991 5873.034 - 5898.240: 4.8788% ( 287) 00:07:39.991 5898.240 - 5923.446: 6.5955% ( 323) 00:07:39.991 5923.446 - 5948.652: 8.6150% ( 380) 00:07:39.991 5948.652 - 5973.858: 10.7834% ( 408) 00:07:39.991 5973.858 - 5999.065: 12.9517% ( 408) 00:07:39.991 5999.065 - 6024.271: 15.3752% ( 456) 00:07:39.991 6024.271 - 6049.477: 17.7402% ( 445) 00:07:39.991 6049.477 - 6074.683: 19.9192% ( 410) 00:07:39.991 6074.683 - 6099.889: 22.2205% ( 433) 00:07:39.991 6099.889 - 6125.095: 24.7608% ( 478) 00:07:39.991 6125.095 - 6150.302: 27.1631% ( 452) 00:07:39.991 6150.302 - 6175.508: 29.5812% ( 455) 00:07:39.991 6175.508 - 6200.714: 31.8824% ( 433) 00:07:39.991 6200.714 - 6225.920: 34.2900% ( 453) 00:07:39.991 6225.920 - 6251.126: 36.6815% ( 450) 00:07:39.991 6251.126 - 6276.332: 39.0678% ( 449) 00:07:39.991 6276.332 - 6301.538: 41.4807% ( 454) 00:07:39.991 6301.538 - 6326.745: 43.9892% ( 472) 00:07:39.991 6326.745 - 6351.951: 46.5402% ( 480) 00:07:39.991 6351.951 - 6377.157: 49.0274% ( 468) 00:07:39.991 6377.157 - 6402.363: 51.5200% ( 469) 00:07:39.991 6402.363 - 6427.569: 54.0125% ( 469) 00:07:39.991 6427.569 - 6452.775: 56.4892% ( 466) 00:07:39.991 6452.775 - 6503.188: 61.5009% ( 943) 00:07:39.991 6503.188 - 6553.600: 66.6135% ( 962) 00:07:39.991 6553.600 - 6604.012: 71.3329% ( 888) 00:07:39.991 6604.012 - 6654.425: 75.5634% ( 796) 00:07:39.991 6654.425 - 6704.837: 78.7681% ( 603) 
00:07:39.991 6704.837 - 6755.249: 81.0480% ( 429) 00:07:39.991 6755.249 - 6805.662: 82.6531% ( 302) 00:07:39.991 6805.662 - 6856.074: 83.8116% ( 218) 00:07:39.991 6856.074 - 6906.486: 84.5823% ( 145) 00:07:39.991 6906.486 - 6956.898: 85.1137% ( 100) 00:07:39.991 6956.898 - 7007.311: 85.5336% ( 79) 00:07:39.991 7007.311 - 7057.723: 85.8472% ( 59) 00:07:39.991 7057.723 - 7108.135: 86.1182% ( 51) 00:07:39.991 7108.135 - 7158.548: 86.3786% ( 49) 00:07:39.991 7158.548 - 7208.960: 86.6337% ( 48) 00:07:39.991 7208.960 - 7259.372: 86.8835% ( 47) 00:07:39.991 7259.372 - 7309.785: 87.0908% ( 39) 00:07:39.991 7309.785 - 7360.197: 87.2927% ( 38) 00:07:39.991 7360.197 - 7410.609: 87.4415% ( 28) 00:07:39.991 7410.609 - 7461.022: 87.6488% ( 39) 00:07:39.991 7461.022 - 7511.434: 87.8348% ( 35) 00:07:39.991 7511.434 - 7561.846: 88.0261% ( 36) 00:07:39.991 7561.846 - 7612.258: 88.2122% ( 35) 00:07:39.991 7612.258 - 7662.671: 88.3929% ( 34) 00:07:39.991 7662.671 - 7713.083: 88.5417% ( 28) 00:07:39.991 7713.083 - 7763.495: 88.7064% ( 31) 00:07:39.991 7763.495 - 7813.908: 88.9084% ( 38) 00:07:39.991 7813.908 - 7864.320: 89.0678% ( 30) 00:07:39.991 7864.320 - 7914.732: 89.2538% ( 35) 00:07:39.991 7914.732 - 7965.145: 89.4664% ( 40) 00:07:39.991 7965.145 - 8015.557: 89.7003% ( 44) 00:07:39.991 8015.557 - 8065.969: 89.9554% ( 48) 00:07:39.991 8065.969 - 8116.382: 90.1945% ( 45) 00:07:39.991 8116.382 - 8166.794: 90.4815% ( 54) 00:07:39.991 8166.794 - 8217.206: 90.7207% ( 45) 00:07:39.991 8217.206 - 8267.618: 90.9226% ( 38) 00:07:39.991 8267.618 - 8318.031: 91.1458% ( 42) 00:07:39.991 8318.031 - 8368.443: 91.3903% ( 46) 00:07:39.991 8368.443 - 8418.855: 91.6348% ( 46) 00:07:39.991 8418.855 - 8469.268: 91.8261% ( 36) 00:07:39.991 8469.268 - 8519.680: 92.0281% ( 38) 00:07:39.991 8519.680 - 8570.092: 92.2300% ( 38) 00:07:39.991 8570.092 - 8620.505: 92.4267% ( 37) 00:07:39.991 8620.505 - 8670.917: 92.6233% ( 37) 00:07:39.991 8670.917 - 8721.329: 92.8199% ( 37) 00:07:39.991 8721.329 - 8771.742: 93.0272% ( 39) 00:07:39.991 8771.742 - 8822.154: 93.2292% ( 38) 00:07:39.991 8822.154 - 8872.566: 93.4152% ( 35) 00:07:39.991 8872.566 - 8922.978: 93.6065% ( 36) 00:07:39.991 8922.978 - 8973.391: 93.7819% ( 33) 00:07:39.991 8973.391 - 9023.803: 93.9307% ( 28) 00:07:39.991 9023.803 - 9074.215: 94.0742% ( 27) 00:07:39.991 9074.215 - 9124.628: 94.2389% ( 31) 00:07:39.991 9124.628 - 9175.040: 94.3931% ( 29) 00:07:39.991 9175.040 - 9225.452: 94.4994% ( 20) 00:07:39.991 9225.452 - 9275.865: 94.6216% ( 23) 00:07:39.991 9275.865 - 9326.277: 94.7385% ( 22) 00:07:39.991 9326.277 - 9376.689: 94.9033% ( 31) 00:07:39.991 9376.689 - 9427.102: 95.0521% ( 28) 00:07:39.991 9427.102 - 9477.514: 95.1743% ( 23) 00:07:39.991 9477.514 - 9527.926: 95.3125% ( 26) 00:07:39.991 9527.926 - 9578.338: 95.4560% ( 27) 00:07:39.991 9578.338 - 9628.751: 95.5570% ( 19) 00:07:39.991 9628.751 - 9679.163: 95.6686% ( 21) 00:07:39.991 9679.163 - 9729.575: 95.7961% ( 24) 00:07:39.991 9729.575 - 9779.988: 95.8865% ( 17) 00:07:39.991 9779.988 - 9830.400: 95.9768% ( 17) 00:07:39.991 9830.400 - 9880.812: 96.0725% ( 18) 00:07:39.991 9880.812 - 9931.225: 96.1788% ( 20) 00:07:39.991 9931.225 - 9981.637: 96.2744% ( 18) 00:07:39.991 9981.637 - 10032.049: 96.3542% ( 15) 00:07:39.991 10032.049 - 10082.462: 96.4339% ( 15) 00:07:39.991 10082.462 - 10132.874: 96.5030% ( 13) 00:07:39.992 10132.874 - 10183.286: 96.5614% ( 11) 00:07:39.992 10183.286 - 10233.698: 96.6358% ( 14) 00:07:39.992 10233.698 - 10284.111: 96.7209% ( 16) 00:07:39.992 10284.111 - 10334.523: 96.7953% ( 14) 
00:07:39.992 10334.523 - 10384.935: 96.8697% ( 14) 00:07:39.992 10384.935 - 10435.348: 96.9494% ( 15) 00:07:39.992 10435.348 - 10485.760: 97.0238% ( 14) 00:07:39.992 10485.760 - 10536.172: 97.0876% ( 12) 00:07:39.992 10536.172 - 10586.585: 97.1301% ( 8) 00:07:39.992 10586.585 - 10636.997: 97.1726% ( 8) 00:07:39.992 10636.997 - 10687.409: 97.2258% ( 10) 00:07:39.992 10687.409 - 10737.822: 97.2789% ( 10) 00:07:39.992 10737.822 - 10788.234: 97.3321% ( 10) 00:07:39.992 10788.234 - 10838.646: 97.3905% ( 11) 00:07:39.992 10838.646 - 10889.058: 97.4490% ( 11) 00:07:39.992 10889.058 - 10939.471: 97.5181% ( 13) 00:07:39.992 10939.471 - 10989.883: 97.5712% ( 10) 00:07:39.992 10989.883 - 11040.295: 97.6350% ( 12) 00:07:39.992 11040.295 - 11090.708: 97.6828% ( 9) 00:07:39.992 11090.708 - 11141.120: 97.7253% ( 8) 00:07:39.992 11141.120 - 11191.532: 97.7732% ( 9) 00:07:39.992 11191.532 - 11241.945: 97.8157% ( 8) 00:07:39.992 11241.945 - 11292.357: 97.8635% ( 9) 00:07:39.992 11292.357 - 11342.769: 97.9167% ( 10) 00:07:39.992 11342.769 - 11393.182: 97.9645% ( 9) 00:07:39.992 11393.182 - 11443.594: 98.0017% ( 7) 00:07:39.992 11443.594 - 11494.006: 98.0336% ( 6) 00:07:39.992 11494.006 - 11544.418: 98.0655% ( 6) 00:07:39.992 11544.418 - 11594.831: 98.1027% ( 7) 00:07:39.992 11594.831 - 11645.243: 98.1399% ( 7) 00:07:39.992 11645.243 - 11695.655: 98.1718% ( 6) 00:07:39.992 11695.655 - 11746.068: 98.2249% ( 10) 00:07:39.992 11746.068 - 11796.480: 98.2727% ( 9) 00:07:39.992 11796.480 - 11846.892: 98.3046% ( 6) 00:07:39.992 11846.892 - 11897.305: 98.3312% ( 5) 00:07:39.992 11897.305 - 11947.717: 98.3525% ( 4) 00:07:39.992 11947.717 - 11998.129: 98.3737% ( 4) 00:07:39.992 11998.129 - 12048.542: 98.3844% ( 2) 00:07:39.992 12048.542 - 12098.954: 98.3950% ( 2) 00:07:39.992 12098.954 - 12149.366: 98.4056% ( 2) 00:07:39.992 12149.366 - 12199.778: 98.4162% ( 2) 00:07:39.992 12199.778 - 12250.191: 98.4269% ( 2) 00:07:39.992 12250.191 - 12300.603: 98.4375% ( 2) 00:07:39.992 12300.603 - 12351.015: 98.4481% ( 2) 00:07:39.992 12351.015 - 12401.428: 98.4588% ( 2) 00:07:39.992 12401.428 - 12451.840: 98.4694% ( 2) 00:07:39.992 12451.840 - 12502.252: 98.4800% ( 2) 00:07:39.992 12502.252 - 12552.665: 98.4906% ( 2) 00:07:39.992 12552.665 - 12603.077: 98.5013% ( 2) 00:07:39.992 12603.077 - 12653.489: 98.5119% ( 2) 00:07:39.992 12653.489 - 12703.902: 98.5225% ( 2) 00:07:39.992 12703.902 - 12754.314: 98.5332% ( 2) 00:07:39.992 12754.314 - 12804.726: 98.5438% ( 2) 00:07:39.992 12804.726 - 12855.138: 98.5544% ( 2) 00:07:39.992 12855.138 - 12905.551: 98.5651% ( 2) 00:07:39.992 12905.551 - 13006.375: 98.6023% ( 7) 00:07:39.992 13006.375 - 13107.200: 98.6448% ( 8) 00:07:39.992 13107.200 - 13208.025: 98.6979% ( 10) 00:07:39.992 13208.025 - 13308.849: 98.7457% ( 9) 00:07:39.992 13308.849 - 13409.674: 98.7723% ( 5) 00:07:39.992 13409.674 - 13510.498: 98.8042% ( 6) 00:07:39.992 13510.498 - 13611.323: 98.8308% ( 5) 00:07:39.992 13611.323 - 13712.148: 98.8627% ( 6) 00:07:39.992 13712.148 - 13812.972: 98.9052% ( 8) 00:07:39.992 13812.972 - 13913.797: 98.9583% ( 10) 00:07:39.992 13913.797 - 14014.622: 99.0009% ( 8) 00:07:39.992 14014.622 - 14115.446: 99.0434% ( 8) 00:07:39.992 14115.446 - 14216.271: 99.0806% ( 7) 00:07:39.992 14216.271 - 14317.095: 99.0965% ( 3) 00:07:39.992 14317.095 - 14417.920: 99.1178% ( 4) 00:07:39.992 14417.920 - 14518.745: 99.1443% ( 5) 00:07:39.992 14518.745 - 14619.569: 99.1656% ( 4) 00:07:39.992 14619.569 - 14720.394: 99.1869% ( 4) 00:07:39.992 14720.394 - 14821.218: 99.2081% ( 4) 00:07:39.992 14821.218 - 14922.043: 
99.2294% ( 4) 00:07:39.992 14922.043 - 15022.868: 99.2506% ( 4) 00:07:39.992 15022.868 - 15123.692: 99.2719% ( 4) 00:07:39.992 15123.692 - 15224.517: 99.2985% ( 5) 00:07:39.992 15224.517 - 15325.342: 99.3197% ( 4) 00:07:39.992 16535.237 - 16636.062: 99.3357% ( 3) 00:07:39.992 16636.062 - 16736.886: 99.3622% ( 5) 00:07:39.992 16736.886 - 16837.711: 99.3782% ( 3) 00:07:39.992 16837.711 - 16938.535: 99.3994% ( 4) 00:07:39.992 16938.535 - 17039.360: 99.4207% ( 4) 00:07:39.992 17039.360 - 17140.185: 99.4366% ( 3) 00:07:39.992 17140.185 - 17241.009: 99.4579% ( 4) 00:07:39.992 17241.009 - 17341.834: 99.4792% ( 4) 00:07:39.992 17341.834 - 17442.658: 99.5004% ( 4) 00:07:39.992 17442.658 - 17543.483: 99.5217% ( 4) 00:07:39.992 17543.483 - 17644.308: 99.5429% ( 4) 00:07:39.992 17644.308 - 17745.132: 99.5642% ( 4) 00:07:39.992 17745.132 - 17845.957: 99.5801% ( 3) 00:07:39.992 17845.957 - 17946.782: 99.6067% ( 5) 00:07:39.992 17946.782 - 18047.606: 99.6280% ( 4) 00:07:39.992 18047.606 - 18148.431: 99.6439% ( 3) 00:07:39.992 18148.431 - 18249.255: 99.6599% ( 3) 00:07:39.992 21374.818 - 21475.643: 99.6705% ( 2) 00:07:39.992 21475.643 - 21576.468: 99.6918% ( 4) 00:07:39.992 21576.468 - 21677.292: 99.7130% ( 4) 00:07:39.992 21677.292 - 21778.117: 99.7343% ( 4) 00:07:39.992 21778.117 - 21878.942: 99.7502% ( 3) 00:07:39.992 21878.942 - 21979.766: 99.7715% ( 4) 00:07:39.992 21979.766 - 22080.591: 99.7927% ( 4) 00:07:39.992 22080.591 - 22181.415: 99.8140% ( 4) 00:07:39.992 22181.415 - 22282.240: 99.8352% ( 4) 00:07:39.992 22282.240 - 22383.065: 99.8565% ( 4) 00:07:39.992 22383.065 - 22483.889: 99.8778% ( 4) 00:07:39.992 22483.889 - 22584.714: 99.8937% ( 3) 00:07:39.992 22584.714 - 22685.538: 99.9150% ( 4) 00:07:39.992 22685.538 - 22786.363: 99.9362% ( 4) 00:07:39.992 22786.363 - 22887.188: 99.9575% ( 4) 00:07:39.992 22887.188 - 22988.012: 99.9787% ( 4) 00:07:39.992 22988.012 - 23088.837: 100.0000% ( 4) 00:07:39.992 00:07:39.992 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:39.992 ============================================================================== 00:07:39.992 Range in us Cumulative IO count 00:07:39.992 5671.385 - 5696.591: 0.0053% ( 1) 00:07:39.992 5696.591 - 5721.797: 0.0159% ( 2) 00:07:39.992 5721.797 - 5747.003: 0.0638% ( 9) 00:07:39.992 5747.003 - 5772.209: 0.2604% ( 37) 00:07:39.992 5772.209 - 5797.415: 0.6218% ( 68) 00:07:39.992 5797.415 - 5822.622: 1.2330% ( 115) 00:07:39.992 5822.622 - 5847.828: 1.9505% ( 135) 00:07:39.992 5847.828 - 5873.034: 3.0453% ( 206) 00:07:39.992 5873.034 - 5898.240: 4.5599% ( 285) 00:07:39.992 5898.240 - 5923.446: 6.3776% ( 342) 00:07:39.992 5923.446 - 5948.652: 8.5459% ( 408) 00:07:39.992 5948.652 - 5973.858: 11.0916% ( 479) 00:07:39.992 5973.858 - 5999.065: 13.4194% ( 438) 00:07:39.992 5999.065 - 6024.271: 15.8960% ( 466) 00:07:39.992 6024.271 - 6049.477: 18.1760% ( 429) 00:07:39.992 6049.477 - 6074.683: 20.3656% ( 412) 00:07:39.992 6074.683 - 6099.889: 22.5287% ( 407) 00:07:39.992 6099.889 - 6125.095: 24.8034% ( 428) 00:07:39.992 6125.095 - 6150.302: 27.0780% ( 428) 00:07:39.992 6150.302 - 6175.508: 29.3899% ( 435) 00:07:39.992 6175.508 - 6200.714: 31.8452% ( 462) 00:07:39.992 6200.714 - 6225.920: 34.2156% ( 446) 00:07:39.992 6225.920 - 6251.126: 36.6284% ( 454) 00:07:39.992 6251.126 - 6276.332: 39.0944% ( 464) 00:07:39.992 6276.332 - 6301.538: 41.5285% ( 458) 00:07:39.992 6301.538 - 6326.745: 44.0104% ( 467) 00:07:39.992 6326.745 - 6351.951: 46.4498% ( 459) 00:07:39.992 6351.951 - 6377.157: 48.8627% ( 454) 00:07:39.992 6377.157 - 6402.363: 
51.3605% ( 470) 00:07:39.992 6402.363 - 6427.569: 53.9116% ( 480) 00:07:39.992 6427.569 - 6452.775: 56.4254% ( 473) 00:07:39.992 6452.775 - 6503.188: 61.4211% ( 940) 00:07:39.992 6503.188 - 6553.600: 66.4328% ( 943) 00:07:39.992 6553.600 - 6604.012: 71.1894% ( 895) 00:07:39.992 6604.012 - 6654.425: 75.4199% ( 796) 00:07:39.992 6654.425 - 6704.837: 78.6139% ( 601) 00:07:39.992 6704.837 - 6755.249: 80.9471% ( 439) 00:07:39.992 6755.249 - 6805.662: 82.4458% ( 282) 00:07:39.992 6805.662 - 6856.074: 83.5991% ( 217) 00:07:39.992 6856.074 - 6906.486: 84.3272% ( 137) 00:07:39.992 6906.486 - 6956.898: 84.8905% ( 106) 00:07:39.992 6956.898 - 7007.311: 85.3369% ( 84) 00:07:39.992 7007.311 - 7057.723: 85.6399% ( 57) 00:07:39.992 7057.723 - 7108.135: 85.9960% ( 67) 00:07:39.992 7108.135 - 7158.548: 86.3042% ( 58) 00:07:39.992 7158.548 - 7208.960: 86.6178% ( 59) 00:07:39.992 7208.960 - 7259.372: 86.9313% ( 59) 00:07:39.992 7259.372 - 7309.785: 87.1811% ( 47) 00:07:39.992 7309.785 - 7360.197: 87.4097% ( 43) 00:07:39.992 7360.197 - 7410.609: 87.6063% ( 37) 00:07:39.992 7410.609 - 7461.022: 87.7976% ( 36) 00:07:39.992 7461.022 - 7511.434: 87.9889% ( 36) 00:07:39.992 7511.434 - 7561.846: 88.1590% ( 32) 00:07:39.992 7561.846 - 7612.258: 88.3929% ( 44) 00:07:39.992 7612.258 - 7662.671: 88.6214% ( 43) 00:07:39.992 7662.671 - 7713.083: 88.8287% ( 39) 00:07:39.992 7713.083 - 7763.495: 89.0253% ( 37) 00:07:39.992 7763.495 - 7813.908: 89.2273% ( 38) 00:07:39.992 7813.908 - 7864.320: 89.4186% ( 36) 00:07:39.992 7864.320 - 7914.732: 89.6259% ( 39) 00:07:39.992 7914.732 - 7965.145: 89.8438% ( 41) 00:07:39.992 7965.145 - 8015.557: 90.1095% ( 50) 00:07:39.992 8015.557 - 8065.969: 90.3008% ( 36) 00:07:39.992 8065.969 - 8116.382: 90.4921% ( 36) 00:07:39.992 8116.382 - 8166.794: 90.6994% ( 39) 00:07:39.992 8166.794 - 8217.206: 90.8907% ( 36) 00:07:39.992 8217.206 - 8267.618: 91.0821% ( 36) 00:07:39.992 8267.618 - 8318.031: 91.2787% ( 37) 00:07:39.992 8318.031 - 8368.443: 91.4913% ( 40) 00:07:39.992 8368.443 - 8418.855: 91.7092% ( 41) 00:07:39.992 8418.855 - 8469.268: 91.9377% ( 43) 00:07:39.992 8469.268 - 8519.680: 92.1025% ( 31) 00:07:39.992 8519.680 - 8570.092: 92.2778% ( 33) 00:07:39.992 8570.092 - 8620.505: 92.4107% ( 25) 00:07:39.992 8620.505 - 8670.917: 92.5914% ( 34) 00:07:39.992 8670.917 - 8721.329: 92.7402% ( 28) 00:07:39.993 8721.329 - 8771.742: 92.9634% ( 42) 00:07:39.993 8771.742 - 8822.154: 93.1282% ( 31) 00:07:39.993 8822.154 - 8872.566: 93.2929% ( 31) 00:07:39.993 8872.566 - 8922.978: 93.4736% ( 34) 00:07:39.993 8922.978 - 8973.391: 93.6437% ( 32) 00:07:39.993 8973.391 - 9023.803: 93.8031% ( 30) 00:07:39.993 9023.803 - 9074.215: 93.9679% ( 31) 00:07:39.993 9074.215 - 9124.628: 94.1327% ( 31) 00:07:39.993 9124.628 - 9175.040: 94.2974% ( 31) 00:07:39.993 9175.040 - 9225.452: 94.4409% ( 27) 00:07:39.993 9225.452 - 9275.865: 94.5685% ( 24) 00:07:39.993 9275.865 - 9326.277: 94.6641% ( 18) 00:07:39.993 9326.277 - 9376.689: 94.7545% ( 17) 00:07:39.993 9376.689 - 9427.102: 94.8661% ( 21) 00:07:39.993 9427.102 - 9477.514: 94.9989% ( 25) 00:07:39.993 9477.514 - 9527.926: 95.1105% ( 21) 00:07:39.993 9527.926 - 9578.338: 95.2381% ( 24) 00:07:39.993 9578.338 - 9628.751: 95.3816% ( 27) 00:07:39.993 9628.751 - 9679.163: 95.5091% ( 24) 00:07:39.993 9679.163 - 9729.575: 95.6580% ( 28) 00:07:39.993 9729.575 - 9779.988: 95.7696% ( 21) 00:07:39.993 9779.988 - 9830.400: 95.9077% ( 26) 00:07:39.993 9830.400 - 9880.812: 96.0193% ( 21) 00:07:39.993 9880.812 - 9931.225: 96.1522% ( 25) 00:07:39.993 9931.225 - 9981.637: 96.2691% ( 
22) 00:07:39.993 9981.637 - 10032.049: 96.3701% ( 19) 00:07:39.993 10032.049 - 10082.462: 96.4764% ( 20) 00:07:39.993 10082.462 - 10132.874: 96.5721% ( 18) 00:07:39.993 10132.874 - 10183.286: 96.6518% ( 15) 00:07:39.993 10183.286 - 10233.698: 96.7421% ( 17) 00:07:39.993 10233.698 - 10284.111: 96.8112% ( 13) 00:07:39.993 10284.111 - 10334.523: 96.8856% ( 14) 00:07:39.993 10334.523 - 10384.935: 96.9281% ( 8) 00:07:39.993 10384.935 - 10435.348: 96.9760% ( 9) 00:07:39.993 10435.348 - 10485.760: 97.0185% ( 8) 00:07:39.993 10485.760 - 10536.172: 97.0504% ( 6) 00:07:39.993 10536.172 - 10586.585: 97.0876% ( 7) 00:07:39.993 10586.585 - 10636.997: 97.1354% ( 9) 00:07:39.993 10636.997 - 10687.409: 97.1726% ( 7) 00:07:39.993 10687.409 - 10737.822: 97.2151% ( 8) 00:07:39.993 10737.822 - 10788.234: 97.2683% ( 10) 00:07:39.993 10788.234 - 10838.646: 97.3108% ( 8) 00:07:39.993 10838.646 - 10889.058: 97.3321% ( 4) 00:07:39.993 10889.058 - 10939.471: 97.3639% ( 6) 00:07:39.993 10939.471 - 10989.883: 97.4065% ( 8) 00:07:39.993 10989.883 - 11040.295: 97.4543% ( 9) 00:07:39.993 11040.295 - 11090.708: 97.5340% ( 15) 00:07:39.993 11090.708 - 11141.120: 97.6084% ( 14) 00:07:39.993 11141.120 - 11191.532: 97.6669% ( 11) 00:07:39.993 11191.532 - 11241.945: 97.7094% ( 8) 00:07:39.993 11241.945 - 11292.357: 97.7679% ( 11) 00:07:39.993 11292.357 - 11342.769: 97.8157% ( 9) 00:07:39.993 11342.769 - 11393.182: 97.8795% ( 12) 00:07:39.993 11393.182 - 11443.594: 97.9273% ( 9) 00:07:39.993 11443.594 - 11494.006: 97.9751% ( 9) 00:07:39.993 11494.006 - 11544.418: 98.0283% ( 10) 00:07:39.993 11544.418 - 11594.831: 98.0867% ( 11) 00:07:39.993 11594.831 - 11645.243: 98.1293% ( 8) 00:07:39.993 11645.243 - 11695.655: 98.1771% ( 9) 00:07:39.993 11695.655 - 11746.068: 98.2196% ( 8) 00:07:39.993 11746.068 - 11796.480: 98.2674% ( 9) 00:07:39.993 11796.480 - 11846.892: 98.2993% ( 6) 00:07:39.993 11846.892 - 11897.305: 98.3312% ( 6) 00:07:39.993 11897.305 - 11947.717: 98.3684% ( 7) 00:07:39.993 11947.717 - 11998.129: 98.4003% ( 6) 00:07:39.993 11998.129 - 12048.542: 98.4269% ( 5) 00:07:39.993 12048.542 - 12098.954: 98.4588% ( 6) 00:07:39.993 12098.954 - 12149.366: 98.4906% ( 6) 00:07:39.993 12149.366 - 12199.778: 98.5172% ( 5) 00:07:39.993 12199.778 - 12250.191: 98.5332% ( 3) 00:07:39.993 12250.191 - 12300.603: 98.5544% ( 4) 00:07:39.993 12300.603 - 12351.015: 98.5704% ( 3) 00:07:39.993 12351.015 - 12401.428: 98.5916% ( 4) 00:07:39.993 12401.428 - 12451.840: 98.6076% ( 3) 00:07:39.993 12451.840 - 12502.252: 98.6182% ( 2) 00:07:39.993 12502.252 - 12552.665: 98.6288% ( 2) 00:07:39.993 12552.665 - 12603.077: 98.6395% ( 2) 00:07:39.993 12754.314 - 12804.726: 98.6448% ( 1) 00:07:39.993 12804.726 - 12855.138: 98.6873% ( 8) 00:07:39.993 12855.138 - 12905.551: 98.6979% ( 2) 00:07:39.993 12905.551 - 13006.375: 98.7192% ( 4) 00:07:39.993 13006.375 - 13107.200: 98.7457% ( 5) 00:07:39.993 13107.200 - 13208.025: 98.7883% ( 8) 00:07:39.993 13208.025 - 13308.849: 98.8148% ( 5) 00:07:39.993 13308.849 - 13409.674: 98.8414% ( 5) 00:07:39.993 13409.674 - 13510.498: 98.8733% ( 6) 00:07:39.993 13510.498 - 13611.323: 98.8946% ( 4) 00:07:39.993 13611.323 - 13712.148: 98.9211% ( 5) 00:07:39.993 13712.148 - 13812.972: 98.9530% ( 6) 00:07:39.993 13812.972 - 13913.797: 98.9796% ( 5) 00:07:39.993 14518.745 - 14619.569: 99.0009% ( 4) 00:07:39.993 14619.569 - 14720.394: 99.0274% ( 5) 00:07:39.993 14720.394 - 14821.218: 99.0487% ( 4) 00:07:39.993 14821.218 - 14922.043: 99.0806% ( 6) 00:07:39.993 14922.043 - 15022.868: 99.1390% ( 11) 00:07:39.993 15022.868 - 15123.692: 
99.1709% ( 6) 00:07:39.993 15123.692 - 15224.517: 99.2188% ( 9) 00:07:39.993 15224.517 - 15325.342: 99.2666% ( 9) 00:07:39.993 15325.342 - 15426.166: 99.3091% ( 8) 00:07:39.993 15426.166 - 15526.991: 99.3569% ( 9) 00:07:39.993 15526.991 - 15627.815: 99.3994% ( 8) 00:07:39.993 15627.815 - 15728.640: 99.4420% ( 8) 00:07:39.993 15728.640 - 15829.465: 99.4739% ( 6) 00:07:39.993 15829.465 - 15930.289: 99.5217% ( 9) 00:07:39.993 15930.289 - 16031.114: 99.5642% ( 8) 00:07:39.993 16031.114 - 16131.938: 99.6014% ( 7) 00:07:39.993 16131.938 - 16232.763: 99.6280% ( 5) 00:07:39.993 16232.763 - 16333.588: 99.6492% ( 4) 00:07:39.993 16333.588 - 16434.412: 99.6599% ( 2) 00:07:39.993 19660.800 - 19761.625: 99.6758% ( 3) 00:07:39.993 19761.625 - 19862.449: 99.6918% ( 3) 00:07:39.993 19862.449 - 19963.274: 99.7130% ( 4) 00:07:39.993 19963.274 - 20064.098: 99.7290% ( 3) 00:07:39.993 20064.098 - 20164.923: 99.7502% ( 4) 00:07:39.993 20164.923 - 20265.748: 99.7715% ( 4) 00:07:39.993 20265.748 - 20366.572: 99.7927% ( 4) 00:07:39.993 20366.572 - 20467.397: 99.8140% ( 4) 00:07:39.993 20467.397 - 20568.222: 99.8299% ( 3) 00:07:39.993 20568.222 - 20669.046: 99.8512% ( 4) 00:07:39.993 20669.046 - 20769.871: 99.8724% ( 4) 00:07:39.993 20769.871 - 20870.695: 99.8937% ( 4) 00:07:39.993 20870.695 - 20971.520: 99.9150% ( 4) 00:07:39.993 20971.520 - 21072.345: 99.9362% ( 4) 00:07:39.993 21072.345 - 21173.169: 99.9575% ( 4) 00:07:39.993 21173.169 - 21273.994: 99.9734% ( 3) 00:07:39.993 21273.994 - 21374.818: 99.9947% ( 4) 00:07:39.993 21374.818 - 21475.643: 100.0000% ( 1) 00:07:39.993 00:07:39.993 03:58:32 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:40.929 Initializing NVMe Controllers 00:07:40.929 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:40.929 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:40.929 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:40.929 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:40.929 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:40.929 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:40.929 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:40.929 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:40.929 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:40.929 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:40.929 Initialization complete. Launching workers. 
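The run above was launched with spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0, which appears to request a queue depth of 128, a pure write workload with 12288-byte (12 KiB) I/Os, a one-second run, detailed latency tracking (-LL), and shared-memory id 0. The per-namespace histograms in this log list, for each latency bucket, the cumulative percentage of I/Os completed at or below the bucket's upper bound, and the percentile rows in the summary tables line up with the upper bound of the first bucket whose cumulative percentage reaches the target. As a rough illustration only -- this helper is not part of SPDK or of this test suite, and the bucket-line format is inferred from the log text -- a small Python sketch that parses such bucket rows and recovers percentiles might look like:

    #!/usr/bin/env python3
    """Rough reader for spdk_nvme_perf -LL histogram output (format inferred
    from the log above, e.g. "  6251.126 -  6276.332:    1.0926% (   47)")."""
    import re
    import sys

    # low_us - high_us: cumulative% ( count)
    BUCKET_RE = re.compile(
        r"^\s*(\d+\.\d+)\s*-\s*(\d+\.\d+):\s*(\d+\.\d+)%\s*\(\s*(\d+)\s*\)\s*$"
    )

    def parse_buckets(lines):
        """Yield (low_us, high_us, cumulative_pct, count) for histogram rows."""
        for line in lines:
            m = BUCKET_RE.match(line)
            if m:
                low, high, cum, cnt = m.groups()
                yield float(low), float(high), float(cum), int(cnt)

    def percentile(buckets, pct):
        """Upper bound (us) of the first bucket whose cumulative percentage
        reaches pct -- the convention the summary tables appear to follow."""
        for _low, high, cum, _cnt in buckets:
            if cum >= pct:
                return high
        return None

    if __name__ == "__main__":
        rows = list(parse_buckets(sys.stdin))
        for p in (1, 50, 99, 99.9):
            print(f"{p:>8}% <= {percentile(rows, p)} us")

Piping one namespace's bucket rows into this sketch should reproduce figures close to the "Summary latency data" percentile sections emitted by the tool itself, which is a convenient cross-check when eyeballing a run like the one that follows.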
00:07:40.929 ======================================================== 00:07:40.929 Latency(us) 00:07:40.929 Device Information : IOPS MiB/s Average min max 00:07:40.929 PCIE (0000:00:13.0) NSID 1 from core 0: 17646.92 206.80 7263.76 5907.06 37755.44 00:07:40.929 PCIE (0000:00:10.0) NSID 1 from core 0: 17646.92 206.80 7252.21 5680.21 36514.65 00:07:40.929 PCIE (0000:00:11.0) NSID 1 from core 0: 17646.92 206.80 7240.57 5858.52 34706.01 00:07:40.929 PCIE (0000:00:12.0) NSID 1 from core 0: 17646.92 206.80 7229.46 5946.67 33730.34 00:07:40.929 PCIE (0000:00:12.0) NSID 2 from core 0: 17646.92 206.80 7218.20 5990.07 32174.11 00:07:40.929 PCIE (0000:00:12.0) NSID 3 from core 0: 17710.86 207.55 7180.98 5929.24 25435.62 00:07:40.929 ======================================================== 00:07:40.929 Total : 105945.44 1241.55 7230.83 5680.21 37755.44 00:07:40.929 00:07:40.929 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:40.929 ================================================================================= 00:07:40.929 1.00000% : 6276.332us 00:07:40.929 10.00000% : 6553.600us 00:07:40.929 25.00000% : 6704.837us 00:07:40.929 50.00000% : 6856.074us 00:07:40.929 75.00000% : 7108.135us 00:07:40.929 90.00000% : 7914.732us 00:07:40.929 95.00000% : 9427.102us 00:07:40.929 98.00000% : 10536.172us 00:07:40.929 99.00000% : 11544.418us 00:07:40.929 99.50000% : 30852.332us 00:07:40.929 99.90000% : 37506.757us 00:07:40.929 99.99000% : 37910.055us 00:07:40.929 99.99900% : 37910.055us 00:07:40.929 99.99990% : 37910.055us 00:07:40.930 99.99999% : 37910.055us 00:07:40.930 00:07:40.930 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:40.930 ================================================================================= 00:07:40.930 1.00000% : 6150.302us 00:07:40.930 10.00000% : 6503.188us 00:07:40.930 25.00000% : 6654.425us 00:07:40.930 50.00000% : 6856.074us 00:07:40.930 75.00000% : 7208.960us 00:07:40.930 90.00000% : 8015.557us 00:07:40.930 95.00000% : 9376.689us 00:07:40.930 98.00000% : 10334.523us 00:07:40.930 99.00000% : 11695.655us 00:07:40.930 99.50000% : 29239.138us 00:07:40.930 99.90000% : 36095.212us 00:07:40.930 99.99000% : 36498.511us 00:07:40.930 99.99900% : 36700.160us 00:07:40.930 99.99990% : 36700.160us 00:07:40.930 99.99999% : 36700.160us 00:07:40.930 00:07:40.930 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:40.930 ================================================================================= 00:07:40.930 1.00000% : 6276.332us 00:07:40.930 10.00000% : 6553.600us 00:07:40.930 25.00000% : 6704.837us 00:07:40.930 50.00000% : 6856.074us 00:07:40.930 75.00000% : 7108.135us 00:07:40.930 90.00000% : 8166.794us 00:07:40.930 95.00000% : 9427.102us 00:07:40.930 98.00000% : 10334.523us 00:07:40.930 99.00000% : 11846.892us 00:07:40.930 99.50000% : 27625.945us 00:07:40.930 99.90000% : 34280.369us 00:07:40.930 99.99000% : 34683.668us 00:07:40.930 99.99900% : 34885.317us 00:07:40.930 99.99990% : 34885.317us 00:07:40.930 99.99999% : 34885.317us 00:07:40.930 00:07:40.930 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:40.930 ================================================================================= 00:07:40.930 1.00000% : 6276.332us 00:07:40.930 10.00000% : 6553.600us 00:07:40.930 25.00000% : 6704.837us 00:07:40.930 50.00000% : 6856.074us 00:07:40.930 75.00000% : 7108.135us 00:07:40.930 90.00000% : 8166.794us 00:07:40.930 95.00000% : 9225.452us 00:07:40.930 98.00000% : 10435.348us 00:07:40.930 99.00000% : 
11846.892us 00:07:40.930 99.50000% : 26617.698us 00:07:40.930 99.90000% : 33473.772us 00:07:40.930 99.99000% : 33877.071us 00:07:40.930 99.99900% : 33877.071us 00:07:40.930 99.99990% : 33877.071us 00:07:40.930 99.99999% : 33877.071us 00:07:40.930 00:07:40.930 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:40.930 ================================================================================= 00:07:40.930 1.00000% : 6276.332us 00:07:40.930 10.00000% : 6553.600us 00:07:40.930 25.00000% : 6704.837us 00:07:40.930 50.00000% : 6856.074us 00:07:40.930 75.00000% : 7108.135us 00:07:40.930 90.00000% : 8116.382us 00:07:40.930 95.00000% : 9175.040us 00:07:40.930 98.00000% : 10435.348us 00:07:40.930 99.00000% : 11897.305us 00:07:40.930 99.50000% : 25206.154us 00:07:40.930 99.90000% : 31860.578us 00:07:40.930 99.99000% : 32263.877us 00:07:40.930 99.99900% : 32263.877us 00:07:40.930 99.99990% : 32263.877us 00:07:40.930 99.99999% : 32263.877us 00:07:40.930 00:07:40.930 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:40.930 ================================================================================= 00:07:40.930 1.00000% : 6276.332us 00:07:40.930 10.00000% : 6553.600us 00:07:40.930 25.00000% : 6704.837us 00:07:40.930 50.00000% : 6856.074us 00:07:40.930 75.00000% : 7108.135us 00:07:40.930 90.00000% : 8065.969us 00:07:40.930 95.00000% : 9477.514us 00:07:40.930 98.00000% : 10838.646us 00:07:40.930 99.00000% : 11393.182us 00:07:40.930 99.50000% : 18047.606us 00:07:40.930 99.90000% : 25105.329us 00:07:40.930 99.99000% : 25508.628us 00:07:40.930 99.99900% : 25508.628us 00:07:40.930 99.99990% : 25508.628us 00:07:40.930 99.99999% : 25508.628us 00:07:40.930 00:07:40.930 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:40.930 ============================================================================== 00:07:40.930 Range in us Cumulative IO count 00:07:40.930 5898.240 - 5923.446: 0.0057% ( 1) 00:07:40.930 5923.446 - 5948.652: 0.0113% ( 1) 00:07:40.930 5948.652 - 5973.858: 0.0170% ( 1) 00:07:40.930 5973.858 - 5999.065: 0.0340% ( 3) 00:07:40.930 5999.065 - 6024.271: 0.0396% ( 1) 00:07:40.930 6024.271 - 6049.477: 0.0566% ( 3) 00:07:40.930 6049.477 - 6074.683: 0.1019% ( 8) 00:07:40.930 6074.683 - 6099.889: 0.1698% ( 12) 00:07:40.930 6099.889 - 6125.095: 0.2264% ( 10) 00:07:40.930 6125.095 - 6150.302: 0.2944% ( 12) 00:07:40.930 6150.302 - 6175.508: 0.3793% ( 15) 00:07:40.930 6175.508 - 6200.714: 0.5208% ( 25) 00:07:40.930 6200.714 - 6225.920: 0.7077% ( 33) 00:07:40.930 6225.920 - 6251.126: 0.8832% ( 31) 00:07:40.930 6251.126 - 6276.332: 1.0926% ( 37) 00:07:40.930 6276.332 - 6301.538: 1.3587% ( 47) 00:07:40.930 6301.538 - 6326.745: 1.6870% ( 58) 00:07:40.930 6326.745 - 6351.951: 2.2871% ( 106) 00:07:40.930 6351.951 - 6377.157: 2.7400% ( 80) 00:07:40.930 6377.157 - 6402.363: 3.4250% ( 121) 00:07:40.930 6402.363 - 6427.569: 4.1723% ( 132) 00:07:40.930 6427.569 - 6452.775: 5.1404% ( 171) 00:07:40.930 6452.775 - 6503.188: 7.9087% ( 489) 00:07:40.930 6503.188 - 6553.600: 11.3734% ( 612) 00:07:40.930 6553.600 - 6604.012: 16.4232% ( 892) 00:07:40.930 6604.012 - 6654.425: 22.7355% ( 1115) 00:07:40.930 6654.425 - 6704.837: 31.1877% ( 1493) 00:07:40.930 6704.837 - 6755.249: 39.0908% ( 1396) 00:07:40.930 6755.249 - 6805.662: 46.7108% ( 1346) 00:07:40.930 6805.662 - 6856.074: 54.4667% ( 1370) 00:07:40.930 6856.074 - 6906.486: 60.1959% ( 1012) 00:07:40.930 6906.486 - 6956.898: 65.9081% ( 1009) 00:07:40.930 6956.898 - 7007.311: 69.4463% ( 625) 00:07:40.930 
7007.311 - 7057.723: 73.7489% ( 760) 00:07:40.930 7057.723 - 7108.135: 76.8342% ( 545) 00:07:40.930 7108.135 - 7158.548: 79.1723% ( 413) 00:07:40.930 7158.548 - 7208.960: 81.0122% ( 325) 00:07:40.930 7208.960 - 7259.372: 82.6653% ( 292) 00:07:40.930 7259.372 - 7309.785: 83.9447% ( 226) 00:07:40.930 7309.785 - 7360.197: 84.6014% ( 116) 00:07:40.930 7360.197 - 7410.609: 85.3148% ( 126) 00:07:40.930 7410.609 - 7461.022: 86.1017% ( 139) 00:07:40.930 7461.022 - 7511.434: 86.7188% ( 109) 00:07:40.930 7511.434 - 7561.846: 87.2283% ( 90) 00:07:40.930 7561.846 - 7612.258: 87.6076% ( 67) 00:07:40.930 7612.258 - 7662.671: 88.1001% ( 87) 00:07:40.930 7662.671 - 7713.083: 88.6436% ( 96) 00:07:40.930 7713.083 - 7763.495: 89.0965% ( 80) 00:07:40.930 7763.495 - 7813.908: 89.3852% ( 51) 00:07:40.930 7813.908 - 7864.320: 89.6683% ( 50) 00:07:40.930 7864.320 - 7914.732: 90.0306% ( 64) 00:07:40.930 7914.732 - 7965.145: 90.3476% ( 56) 00:07:40.930 7965.145 - 8015.557: 90.5684% ( 39) 00:07:40.930 8015.557 - 8065.969: 90.8288% ( 46) 00:07:40.930 8065.969 - 8116.382: 91.1288% ( 53) 00:07:40.930 8116.382 - 8166.794: 91.3100% ( 32) 00:07:40.930 8166.794 - 8217.206: 91.4629% ( 27) 00:07:40.930 8217.206 - 8267.618: 91.8478% ( 68) 00:07:40.930 8267.618 - 8318.031: 92.0913% ( 43) 00:07:40.930 8318.031 - 8368.443: 92.2781% ( 33) 00:07:40.930 8368.443 - 8418.855: 92.5725% ( 52) 00:07:40.930 8418.855 - 8469.268: 92.7027% ( 23) 00:07:40.930 8469.268 - 8519.680: 92.7876% ( 15) 00:07:40.930 8519.680 - 8570.092: 93.0990% ( 55) 00:07:40.930 8570.092 - 8620.505: 93.1726% ( 13) 00:07:40.930 8620.505 - 8670.917: 93.2462% ( 13) 00:07:40.930 8670.917 - 8721.329: 93.4216% ( 31) 00:07:40.930 8721.329 - 8771.742: 93.5632% ( 25) 00:07:40.930 8771.742 - 8822.154: 93.6198% ( 10) 00:07:40.930 8822.154 - 8872.566: 93.6821% ( 11) 00:07:40.930 8872.566 - 8922.978: 93.7726% ( 16) 00:07:40.930 8922.978 - 8973.391: 93.8462% ( 13) 00:07:40.930 8973.391 - 9023.803: 93.9142% ( 12) 00:07:40.930 9023.803 - 9074.215: 93.9764% ( 11) 00:07:40.930 9074.215 - 9124.628: 94.0331% ( 10) 00:07:40.930 9124.628 - 9175.040: 94.1123% ( 14) 00:07:40.930 9175.040 - 9225.452: 94.2199% ( 19) 00:07:40.930 9225.452 - 9275.865: 94.3784% ( 28) 00:07:40.930 9275.865 - 9326.277: 94.5879% ( 37) 00:07:40.930 9326.277 - 9376.689: 94.7181% ( 23) 00:07:40.930 9376.689 - 9427.102: 95.0125% ( 52) 00:07:40.930 9427.102 - 9477.514: 95.2049% ( 34) 00:07:40.930 9477.514 - 9527.926: 95.3182% ( 20) 00:07:40.930 9527.926 - 9578.338: 95.4201% ( 18) 00:07:40.930 9578.338 - 9628.751: 95.5106% ( 16) 00:07:40.930 9628.751 - 9679.163: 95.7031% ( 34) 00:07:40.930 9679.163 - 9729.575: 95.8899% ( 33) 00:07:40.930 9729.575 - 9779.988: 96.1277% ( 42) 00:07:40.930 9779.988 - 9830.400: 96.3598% ( 41) 00:07:40.930 9830.400 - 9880.812: 96.5070% ( 26) 00:07:40.930 9880.812 - 9931.225: 96.6825% ( 31) 00:07:40.930 9931.225 - 9981.637: 96.7901% ( 19) 00:07:40.930 9981.637 - 10032.049: 96.8637% ( 13) 00:07:40.930 10032.049 - 10082.462: 97.0675% ( 36) 00:07:40.930 10082.462 - 10132.874: 97.2543% ( 33) 00:07:40.930 10132.874 - 10183.286: 97.3449% ( 16) 00:07:40.930 10183.286 - 10233.698: 97.4298% ( 15) 00:07:40.930 10233.698 - 10284.111: 97.5204% ( 16) 00:07:40.930 10284.111 - 10334.523: 97.6166% ( 17) 00:07:40.930 10334.523 - 10384.935: 97.8544% ( 42) 00:07:40.930 10384.935 - 10435.348: 97.9280% ( 13) 00:07:40.930 10435.348 - 10485.760: 97.9959% ( 12) 00:07:40.930 10485.760 - 10536.172: 98.0525% ( 10) 00:07:40.930 10536.172 - 10586.585: 98.1601% ( 19) 00:07:40.930 10586.585 - 10636.997: 98.2224% ( 11) 
00:07:40.930 10636.997 - 10687.409: 98.2563% ( 6) 00:07:40.930 10687.409 - 10737.822: 98.2960% ( 7) 00:07:40.930 10737.822 - 10788.234: 98.3413% ( 8) 00:07:40.930 10788.234 - 10838.646: 98.4205% ( 14) 00:07:40.930 10838.646 - 10889.058: 98.5168% ( 17) 00:07:40.930 10889.058 - 10939.471: 98.6017% ( 15) 00:07:40.930 10939.471 - 10989.883: 98.6470% ( 8) 00:07:40.930 10989.883 - 11040.295: 98.6979% ( 9) 00:07:40.930 11040.295 - 11090.708: 98.7489% ( 9) 00:07:40.930 11090.708 - 11141.120: 98.7828% ( 6) 00:07:40.930 11141.120 - 11191.532: 98.8225% ( 7) 00:07:40.930 11191.532 - 11241.945: 98.8451% ( 4) 00:07:40.930 11241.945 - 11292.357: 98.8847% ( 7) 00:07:40.930 11292.357 - 11342.769: 98.9130% ( 5) 00:07:40.930 11342.769 - 11393.182: 98.9470% ( 6) 00:07:40.930 11393.182 - 11443.594: 98.9640% ( 3) 00:07:40.930 11443.594 - 11494.006: 98.9866% ( 4) 00:07:40.930 11494.006 - 11544.418: 99.0093% ( 4) 00:07:40.930 11544.418 - 11594.831: 99.0319% ( 4) 00:07:40.931 11594.831 - 11645.243: 99.0546% ( 4) 00:07:40.931 11645.243 - 11695.655: 99.0999% ( 8) 00:07:40.931 11695.655 - 11746.068: 99.1508% ( 9) 00:07:40.931 11746.068 - 11796.480: 99.1904% ( 7) 00:07:40.931 11796.480 - 11846.892: 99.2244% ( 6) 00:07:40.931 11846.892 - 11897.305: 99.2527% ( 5) 00:07:40.931 11897.305 - 11947.717: 99.2754% ( 4) 00:07:40.931 29440.788 - 29642.437: 99.2810% ( 1) 00:07:40.931 29642.437 - 29844.086: 99.3150% ( 6) 00:07:40.931 29844.086 - 30045.735: 99.3433% ( 5) 00:07:40.931 30045.735 - 30247.385: 99.3773% ( 6) 00:07:40.931 30247.385 - 30449.034: 99.4056% ( 5) 00:07:40.931 30449.034 - 30650.683: 99.4565% ( 9) 00:07:40.931 30650.683 - 30852.332: 99.5075% ( 9) 00:07:40.931 30852.332 - 31053.982: 99.5471% ( 7) 00:07:40.931 31053.982 - 31255.631: 99.5981% ( 9) 00:07:40.931 31255.631 - 31457.280: 99.6377% ( 7) 00:07:40.931 36095.212 - 36296.862: 99.6547% ( 3) 00:07:40.931 36296.862 - 36498.511: 99.7000% ( 8) 00:07:40.931 36498.511 - 36700.160: 99.7509% ( 9) 00:07:40.931 36700.160 - 36901.809: 99.8019% ( 9) 00:07:40.931 36901.809 - 37103.458: 99.8471% ( 8) 00:07:40.931 37103.458 - 37305.108: 99.8924% ( 8) 00:07:40.931 37305.108 - 37506.757: 99.9377% ( 8) 00:07:40.931 37506.757 - 37708.406: 99.9887% ( 9) 00:07:40.931 37708.406 - 37910.055: 100.0000% ( 2) 00:07:40.931 00:07:40.931 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:40.931 ============================================================================== 00:07:40.931 Range in us Cumulative IO count 00:07:40.931 5671.385 - 5696.591: 0.0057% ( 1) 00:07:40.931 5747.003 - 5772.209: 0.0113% ( 1) 00:07:40.931 5797.415 - 5822.622: 0.0170% ( 1) 00:07:40.931 5822.622 - 5847.828: 0.0283% ( 2) 00:07:40.931 5847.828 - 5873.034: 0.0453% ( 3) 00:07:40.931 5873.034 - 5898.240: 0.0736% ( 5) 00:07:40.931 5898.240 - 5923.446: 0.1076% ( 6) 00:07:40.931 5923.446 - 5948.652: 0.1472% ( 7) 00:07:40.931 5948.652 - 5973.858: 0.1868% ( 7) 00:07:40.931 5973.858 - 5999.065: 0.3227% ( 24) 00:07:40.931 5999.065 - 6024.271: 0.4189% ( 17) 00:07:40.931 6024.271 - 6049.477: 0.5718% ( 27) 00:07:40.931 6049.477 - 6074.683: 0.6510% ( 14) 00:07:40.931 6074.683 - 6099.889: 0.7869% ( 24) 00:07:40.931 6099.889 - 6125.095: 0.8945% ( 19) 00:07:40.931 6125.095 - 6150.302: 1.0303% ( 24) 00:07:40.931 6150.302 - 6175.508: 1.2625% ( 41) 00:07:40.931 6175.508 - 6200.714: 1.6021% ( 60) 00:07:40.931 6200.714 - 6225.920: 1.9248% ( 57) 00:07:40.931 6225.920 - 6251.126: 2.3211% ( 70) 00:07:40.931 6251.126 - 6276.332: 2.8193% ( 88) 00:07:40.931 6276.332 - 6301.538: 3.3062% ( 86) 00:07:40.931 6301.538 - 
6326.745: 3.8043% ( 88) 00:07:40.931 6326.745 - 6351.951: 4.5573% ( 133) 00:07:40.931 6351.951 - 6377.157: 5.6159% ( 187) 00:07:40.931 6377.157 - 6402.363: 6.9180% ( 230) 00:07:40.931 6402.363 - 6427.569: 8.1182% ( 212) 00:07:40.931 6427.569 - 6452.775: 9.7883% ( 295) 00:07:40.931 6452.775 - 6503.188: 13.7455% ( 699) 00:07:40.931 6503.188 - 6553.600: 18.9708% ( 923) 00:07:40.931 6553.600 - 6604.012: 24.4962% ( 976) 00:07:40.931 6604.012 - 6654.425: 30.9783% ( 1145) 00:07:40.931 6654.425 - 6704.837: 36.7357% ( 1017) 00:07:40.931 6704.837 - 6755.249: 42.3630% ( 994) 00:07:40.931 6755.249 - 6805.662: 47.9563% ( 988) 00:07:40.931 6805.662 - 6856.074: 52.6608% ( 831) 00:07:40.931 6856.074 - 6906.486: 56.9407% ( 756) 00:07:40.931 6906.486 - 6956.898: 60.9432% ( 707) 00:07:40.931 6956.898 - 7007.311: 64.8834% ( 696) 00:07:40.931 7007.311 - 7057.723: 68.3594% ( 614) 00:07:40.931 7057.723 - 7108.135: 71.2749% ( 515) 00:07:40.931 7108.135 - 7158.548: 73.9413% ( 471) 00:07:40.931 7158.548 - 7208.960: 76.4436% ( 442) 00:07:40.931 7208.960 - 7259.372: 78.9006% ( 434) 00:07:40.931 7259.372 - 7309.785: 80.7405% ( 325) 00:07:40.931 7309.785 - 7360.197: 82.4389% ( 300) 00:07:40.931 7360.197 - 7410.609: 83.7126% ( 225) 00:07:40.931 7410.609 - 7461.022: 84.7939% ( 191) 00:07:40.931 7461.022 - 7511.434: 85.6318% ( 148) 00:07:40.931 7511.434 - 7561.846: 86.3564% ( 128) 00:07:40.931 7561.846 - 7612.258: 86.8999% ( 96) 00:07:40.931 7612.258 - 7662.671: 87.4038% ( 89) 00:07:40.931 7662.671 - 7713.083: 87.9246% ( 92) 00:07:40.931 7713.083 - 7763.495: 88.3322% ( 72) 00:07:40.931 7763.495 - 7813.908: 88.6719% ( 60) 00:07:40.931 7813.908 - 7864.320: 88.9436% ( 48) 00:07:40.931 7864.320 - 7914.732: 89.3003% ( 63) 00:07:40.931 7914.732 - 7965.145: 89.6399% ( 60) 00:07:40.931 7965.145 - 8015.557: 90.0249% ( 68) 00:07:40.931 8015.557 - 8065.969: 90.2853% ( 46) 00:07:40.931 8065.969 - 8116.382: 90.5571% ( 48) 00:07:40.931 8116.382 - 8166.794: 90.7326% ( 31) 00:07:40.931 8166.794 - 8217.206: 90.9534% ( 39) 00:07:40.931 8217.206 - 8267.618: 91.1458% ( 34) 00:07:40.931 8267.618 - 8318.031: 91.3949% ( 44) 00:07:40.931 8318.031 - 8368.443: 91.5761% ( 32) 00:07:40.931 8368.443 - 8418.855: 91.7742% ( 35) 00:07:40.931 8418.855 - 8469.268: 92.0686% ( 52) 00:07:40.931 8469.268 - 8519.680: 92.3404% ( 48) 00:07:40.931 8519.680 - 8570.092: 92.4819% ( 25) 00:07:40.931 8570.092 - 8620.505: 92.6234% ( 25) 00:07:40.931 8620.505 - 8670.917: 92.8046% ( 32) 00:07:40.931 8670.917 - 8721.329: 92.9404% ( 24) 00:07:40.931 8721.329 - 8771.742: 93.1669% ( 40) 00:07:40.931 8771.742 - 8822.154: 93.2801% ( 20) 00:07:40.931 8822.154 - 8872.566: 93.3933% ( 20) 00:07:40.931 8872.566 - 8922.978: 93.4726% ( 14) 00:07:40.931 8922.978 - 8973.391: 93.5519% ( 14) 00:07:40.931 8973.391 - 9023.803: 93.6594% ( 19) 00:07:40.931 9023.803 - 9074.215: 93.7217% ( 11) 00:07:40.931 9074.215 - 9124.628: 93.9198% ( 35) 00:07:40.931 9124.628 - 9175.040: 94.1916% ( 48) 00:07:40.931 9175.040 - 9225.452: 94.4463% ( 45) 00:07:40.931 9225.452 - 9275.865: 94.7011% ( 45) 00:07:40.931 9275.865 - 9326.277: 94.9389% ( 42) 00:07:40.931 9326.277 - 9376.689: 95.0917% ( 27) 00:07:40.931 9376.689 - 9427.102: 95.2672% ( 31) 00:07:40.931 9427.102 - 9477.514: 95.4654% ( 35) 00:07:40.931 9477.514 - 9527.926: 95.6295% ( 29) 00:07:40.931 9527.926 - 9578.338: 95.8220% ( 34) 00:07:40.931 9578.338 - 9628.751: 95.9918% ( 30) 00:07:40.931 9628.751 - 9679.163: 96.1390% ( 26) 00:07:40.931 9679.163 - 9729.575: 96.3315% ( 34) 00:07:40.931 9729.575 - 9779.988: 96.5070% ( 31) 00:07:40.931 9779.988 - 
9830.400: 96.6769% ( 30) 00:07:40.931 9830.400 - 9880.812: 96.8693% ( 34) 00:07:40.931 9880.812 - 9931.225: 96.9939% ( 22) 00:07:40.931 9931.225 - 9981.637: 97.1128% ( 21) 00:07:40.931 9981.637 - 10032.049: 97.2317% ( 21) 00:07:40.931 10032.049 - 10082.462: 97.3392% ( 19) 00:07:40.931 10082.462 - 10132.874: 97.4638% ( 22) 00:07:40.931 10132.874 - 10183.286: 97.5657% ( 18) 00:07:40.931 10183.286 - 10233.698: 97.6676% ( 18) 00:07:40.931 10233.698 - 10284.111: 97.8091% ( 25) 00:07:40.931 10284.111 - 10334.523: 98.0186% ( 37) 00:07:40.931 10334.523 - 10384.935: 98.2394% ( 39) 00:07:40.931 10384.935 - 10435.348: 98.3073% ( 12) 00:07:40.931 10435.348 - 10485.760: 98.3526% ( 8) 00:07:40.931 10485.760 - 10536.172: 98.4092% ( 10) 00:07:40.931 10536.172 - 10586.585: 98.4658% ( 10) 00:07:40.931 10586.585 - 10636.997: 98.5054% ( 7) 00:07:40.931 10636.997 - 10687.409: 98.5507% ( 8) 00:07:40.931 10687.409 - 10737.822: 98.5904% ( 7) 00:07:40.931 10737.822 - 10788.234: 98.6470% ( 10) 00:07:40.931 10788.234 - 10838.646: 98.6979% ( 9) 00:07:40.931 10838.646 - 10889.058: 98.7432% ( 8) 00:07:40.931 10889.058 - 10939.471: 98.7659% ( 4) 00:07:40.931 10939.471 - 10989.883: 98.7942% ( 5) 00:07:40.931 10989.883 - 11040.295: 98.8225% ( 5) 00:07:40.931 11040.295 - 11090.708: 98.8338% ( 2) 00:07:40.931 11191.532 - 11241.945: 98.8394% ( 1) 00:07:40.931 11241.945 - 11292.357: 98.8564% ( 3) 00:07:40.931 11292.357 - 11342.769: 98.8734% ( 3) 00:07:40.931 11342.769 - 11393.182: 98.8791% ( 1) 00:07:40.931 11393.182 - 11443.594: 98.8904% ( 2) 00:07:40.931 11443.594 - 11494.006: 98.8961% ( 1) 00:07:40.931 11594.831 - 11645.243: 98.9527% ( 10) 00:07:40.931 11645.243 - 11695.655: 99.0433% ( 16) 00:07:40.931 11695.655 - 11746.068: 99.0999% ( 10) 00:07:40.931 11746.068 - 11796.480: 99.1225% ( 4) 00:07:40.931 11796.480 - 11846.892: 99.1338% ( 2) 00:07:40.931 11846.892 - 11897.305: 99.1508% ( 3) 00:07:40.931 11897.305 - 11947.717: 99.1735% ( 4) 00:07:40.931 11947.717 - 11998.129: 99.1904% ( 3) 00:07:40.931 11998.129 - 12048.542: 99.2131% ( 4) 00:07:40.931 12048.542 - 12098.954: 99.2244% ( 2) 00:07:40.931 12098.954 - 12149.366: 99.2357% ( 2) 00:07:40.931 12149.366 - 12199.778: 99.2471% ( 2) 00:07:40.931 12199.778 - 12250.191: 99.2754% ( 5) 00:07:40.931 28029.243 - 28230.892: 99.3207% ( 8) 00:07:40.931 28230.892 - 28432.542: 99.3603% ( 7) 00:07:40.931 28432.542 - 28634.191: 99.4056% ( 8) 00:07:40.931 28634.191 - 28835.840: 99.4509% ( 8) 00:07:40.931 28835.840 - 29037.489: 99.4905% ( 7) 00:07:40.931 29037.489 - 29239.138: 99.5358% ( 8) 00:07:40.931 29239.138 - 29440.788: 99.5754% ( 7) 00:07:40.931 29440.788 - 29642.437: 99.6207% ( 8) 00:07:40.931 29642.437 - 29844.086: 99.6377% ( 3) 00:07:40.931 34683.668 - 34885.317: 99.6660% ( 5) 00:07:40.931 34885.317 - 35086.966: 99.7113% ( 8) 00:07:40.931 35086.966 - 35288.615: 99.7509% ( 7) 00:07:40.931 35288.615 - 35490.265: 99.7962% ( 8) 00:07:40.931 35490.265 - 35691.914: 99.8302% ( 6) 00:07:40.931 35691.914 - 35893.563: 99.8755% ( 8) 00:07:40.931 35893.563 - 36095.212: 99.9151% ( 7) 00:07:40.931 36095.212 - 36296.862: 99.9490% ( 6) 00:07:40.931 36296.862 - 36498.511: 99.9943% ( 8) 00:07:40.931 36498.511 - 36700.160: 100.0000% ( 1) 00:07:40.931 00:07:40.931 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:40.931 ============================================================================== 00:07:40.931 Range in us Cumulative IO count 00:07:40.931 5847.828 - 5873.034: 0.0057% ( 1) 00:07:40.931 5898.240 - 5923.446: 0.0113% ( 1) 00:07:40.931 5923.446 - 5948.652: 0.0170% ( 1) 
00:07:40.931 5973.858 - 5999.065: 0.0283% ( 2) 00:07:40.931 6024.271 - 6049.477: 0.0396% ( 2) 00:07:40.931 6049.477 - 6074.683: 0.0566% ( 3) 00:07:40.932 6074.683 - 6099.889: 0.0793% ( 4) 00:07:40.932 6099.889 - 6125.095: 0.1189% ( 7) 00:07:40.932 6125.095 - 6150.302: 0.3284% ( 37) 00:07:40.932 6150.302 - 6175.508: 0.3963% ( 12) 00:07:40.932 6175.508 - 6200.714: 0.4925% ( 17) 00:07:40.932 6200.714 - 6225.920: 0.6001% ( 19) 00:07:40.932 6225.920 - 6251.126: 0.8096% ( 37) 00:07:40.932 6251.126 - 6276.332: 1.0417% ( 41) 00:07:40.932 6276.332 - 6301.538: 1.3361% ( 52) 00:07:40.932 6301.538 - 6326.745: 1.9078% ( 101) 00:07:40.932 6326.745 - 6351.951: 2.3268% ( 74) 00:07:40.932 6351.951 - 6377.157: 2.8816% ( 98) 00:07:40.932 6377.157 - 6402.363: 3.5439% ( 117) 00:07:40.932 6402.363 - 6427.569: 4.3195% ( 137) 00:07:40.932 6427.569 - 6452.775: 5.4971% ( 208) 00:07:40.932 6452.775 - 6503.188: 8.0956% ( 459) 00:07:40.932 6503.188 - 6553.600: 11.3168% ( 569) 00:07:40.932 6553.600 - 6604.012: 16.4289% ( 903) 00:07:40.932 6604.012 - 6654.425: 24.1112% ( 1357) 00:07:40.932 6654.425 - 6704.837: 31.9860% ( 1391) 00:07:40.932 6704.837 - 6755.249: 38.9946% ( 1238) 00:07:40.932 6755.249 - 6805.662: 47.0901% ( 1430) 00:07:40.932 6805.662 - 6856.074: 54.2006% ( 1256) 00:07:40.932 6856.074 - 6906.486: 60.5865% ( 1128) 00:07:40.932 6906.486 - 6956.898: 66.2647% ( 1003) 00:07:40.932 6956.898 - 7007.311: 70.5106% ( 750) 00:07:40.932 7007.311 - 7057.723: 73.9244% ( 603) 00:07:40.932 7057.723 - 7108.135: 77.0890% ( 559) 00:07:40.932 7108.135 - 7158.548: 79.8970% ( 496) 00:07:40.932 7158.548 - 7208.960: 81.7482% ( 327) 00:07:40.932 7208.960 - 7259.372: 82.9031% ( 204) 00:07:40.932 7259.372 - 7309.785: 84.1486% ( 220) 00:07:40.932 7309.785 - 7360.197: 85.1279% ( 173) 00:07:40.932 7360.197 - 7410.609: 86.1243% ( 176) 00:07:40.932 7410.609 - 7461.022: 87.0245% ( 159) 00:07:40.932 7461.022 - 7511.434: 87.5340% ( 90) 00:07:40.932 7511.434 - 7561.846: 87.9019% ( 65) 00:07:40.932 7561.846 - 7612.258: 88.3096% ( 72) 00:07:40.932 7612.258 - 7662.671: 88.4567% ( 26) 00:07:40.932 7662.671 - 7713.083: 88.5870% ( 23) 00:07:40.932 7713.083 - 7763.495: 88.7002% ( 20) 00:07:40.932 7763.495 - 7813.908: 88.7908% ( 16) 00:07:40.932 7813.908 - 7864.320: 88.8927% ( 18) 00:07:40.932 7864.320 - 7914.732: 89.0172% ( 22) 00:07:40.932 7914.732 - 7965.145: 89.2889% ( 48) 00:07:40.932 7965.145 - 8015.557: 89.4928% ( 36) 00:07:40.932 8015.557 - 8065.969: 89.6683% ( 31) 00:07:40.932 8065.969 - 8116.382: 89.8721% ( 36) 00:07:40.932 8116.382 - 8166.794: 90.1042% ( 41) 00:07:40.932 8166.794 - 8217.206: 90.4552% ( 62) 00:07:40.932 8217.206 - 8267.618: 90.8175% ( 64) 00:07:40.932 8267.618 - 8318.031: 91.0836% ( 47) 00:07:40.932 8318.031 - 8368.443: 91.2647% ( 32) 00:07:40.932 8368.443 - 8418.855: 91.5082% ( 43) 00:07:40.932 8418.855 - 8469.268: 91.7176% ( 37) 00:07:40.932 8469.268 - 8519.680: 91.8705% ( 27) 00:07:40.932 8519.680 - 8570.092: 92.0403% ( 30) 00:07:40.932 8570.092 - 8620.505: 92.3573% ( 56) 00:07:40.932 8620.505 - 8670.917: 92.4932% ( 24) 00:07:40.932 8670.917 - 8721.329: 92.5894% ( 17) 00:07:40.932 8721.329 - 8771.742: 92.6970% ( 19) 00:07:40.932 8771.742 - 8822.154: 92.8102% ( 20) 00:07:40.932 8822.154 - 8872.566: 92.9291% ( 21) 00:07:40.932 8872.566 - 8922.978: 93.1442% ( 38) 00:07:40.932 8922.978 - 8973.391: 93.6255% ( 85) 00:07:40.932 8973.391 - 9023.803: 93.9085% ( 50) 00:07:40.932 9023.803 - 9074.215: 94.0444% ( 24) 00:07:40.932 9074.215 - 9124.628: 94.1803% ( 24) 00:07:40.932 9124.628 - 9175.040: 94.3105% ( 23) 00:07:40.932 
9175.040 - 9225.452: 94.4463% ( 24) 00:07:40.932 9225.452 - 9275.865: 94.6332% ( 33) 00:07:40.932 9275.865 - 9326.277: 94.8030% ( 30) 00:07:40.932 9326.277 - 9376.689: 94.9955% ( 34) 00:07:40.932 9376.689 - 9427.102: 95.3068% ( 55) 00:07:40.932 9427.102 - 9477.514: 95.5559% ( 44) 00:07:40.932 9477.514 - 9527.926: 95.6805% ( 22) 00:07:40.932 9527.926 - 9578.338: 95.8050% ( 22) 00:07:40.932 9578.338 - 9628.751: 95.9635% ( 28) 00:07:40.932 9628.751 - 9679.163: 96.1334% ( 30) 00:07:40.932 9679.163 - 9729.575: 96.2636% ( 23) 00:07:40.932 9729.575 - 9779.988: 96.4051% ( 25) 00:07:40.932 9779.988 - 9830.400: 96.5240% ( 21) 00:07:40.932 9830.400 - 9880.812: 96.6486% ( 22) 00:07:40.932 9880.812 - 9931.225: 96.8750% ( 40) 00:07:40.932 9931.225 - 9981.637: 97.0958% ( 39) 00:07:40.932 9981.637 - 10032.049: 97.1864% ( 16) 00:07:40.932 10032.049 - 10082.462: 97.2883% ( 18) 00:07:40.932 10082.462 - 10132.874: 97.3675% ( 14) 00:07:40.932 10132.874 - 10183.286: 97.6223% ( 45) 00:07:40.932 10183.286 - 10233.698: 97.7582% ( 24) 00:07:40.932 10233.698 - 10284.111: 97.9167% ( 28) 00:07:40.932 10284.111 - 10334.523: 98.0469% ( 23) 00:07:40.932 10334.523 - 10384.935: 98.1771% ( 23) 00:07:40.932 10384.935 - 10435.348: 98.2903% ( 20) 00:07:40.932 10435.348 - 10485.760: 98.3696% ( 14) 00:07:40.932 10485.760 - 10536.172: 98.4771% ( 19) 00:07:40.932 10536.172 - 10586.585: 98.5790% ( 18) 00:07:40.932 10586.585 - 10636.997: 98.6696% ( 16) 00:07:40.932 10636.997 - 10687.409: 98.7375% ( 12) 00:07:40.932 10687.409 - 10737.822: 98.8338% ( 17) 00:07:40.932 10737.822 - 10788.234: 98.8508% ( 3) 00:07:40.932 10788.234 - 10838.646: 98.8734% ( 4) 00:07:40.932 10838.646 - 10889.058: 98.8791% ( 1) 00:07:40.932 10889.058 - 10939.471: 98.8847% ( 1) 00:07:40.932 10939.471 - 10989.883: 98.8961% ( 2) 00:07:40.932 10989.883 - 11040.295: 98.9074% ( 2) 00:07:40.932 11040.295 - 11090.708: 98.9130% ( 1) 00:07:40.932 11645.243 - 11695.655: 98.9413% ( 5) 00:07:40.932 11695.655 - 11746.068: 98.9640% ( 4) 00:07:40.932 11746.068 - 11796.480: 98.9980% ( 6) 00:07:40.932 11796.480 - 11846.892: 99.2188% ( 39) 00:07:40.932 11846.892 - 11897.305: 99.2357% ( 3) 00:07:40.932 11897.305 - 11947.717: 99.2527% ( 3) 00:07:40.932 11947.717 - 11998.129: 99.2640% ( 2) 00:07:40.932 11998.129 - 12048.542: 99.2754% ( 2) 00:07:40.932 26617.698 - 26819.348: 99.3263% ( 9) 00:07:40.932 26819.348 - 27020.997: 99.3716% ( 8) 00:07:40.932 27020.997 - 27222.646: 99.4169% ( 8) 00:07:40.932 27222.646 - 27424.295: 99.4622% ( 8) 00:07:40.932 27424.295 - 27625.945: 99.5075% ( 8) 00:07:40.932 27625.945 - 27827.594: 99.5584% ( 9) 00:07:40.932 27827.594 - 28029.243: 99.6037% ( 8) 00:07:40.932 28029.243 - 28230.892: 99.6377% ( 6) 00:07:40.932 33070.474 - 33272.123: 99.6830% ( 8) 00:07:40.932 33272.123 - 33473.772: 99.7283% ( 8) 00:07:40.932 33473.772 - 33675.422: 99.7679% ( 7) 00:07:40.932 33675.422 - 33877.071: 99.8132% ( 8) 00:07:40.932 33877.071 - 34078.720: 99.8585% ( 8) 00:07:40.932 34078.720 - 34280.369: 99.9038% ( 8) 00:07:40.932 34280.369 - 34482.018: 99.9490% ( 8) 00:07:40.932 34482.018 - 34683.668: 99.9943% ( 8) 00:07:40.932 34683.668 - 34885.317: 100.0000% ( 1) 00:07:40.932 00:07:40.932 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:40.932 ============================================================================== 00:07:40.932 Range in us Cumulative IO count 00:07:40.932 5923.446 - 5948.652: 0.0113% ( 2) 00:07:40.932 5948.652 - 5973.858: 0.0170% ( 1) 00:07:40.932 5973.858 - 5999.065: 0.0283% ( 2) 00:07:40.932 5999.065 - 6024.271: 0.0510% ( 4) 
00:07:40.932 6024.271 - 6049.477: 0.0679% ( 3) 00:07:40.932 6049.477 - 6074.683: 0.1019% ( 6) 00:07:40.932 6074.683 - 6099.889: 0.1076% ( 1) 00:07:40.932 6099.889 - 6125.095: 0.1529% ( 8) 00:07:40.932 6125.095 - 6150.302: 0.2661% ( 20) 00:07:40.932 6150.302 - 6175.508: 0.4246% ( 28) 00:07:40.932 6175.508 - 6200.714: 0.4982% ( 13) 00:07:40.932 6200.714 - 6225.920: 0.5831% ( 15) 00:07:40.932 6225.920 - 6251.126: 0.7869% ( 36) 00:07:40.932 6251.126 - 6276.332: 1.0587% ( 48) 00:07:40.932 6276.332 - 6301.538: 1.2625% ( 36) 00:07:40.932 6301.538 - 6326.745: 1.4946% ( 41) 00:07:40.932 6326.745 - 6351.951: 1.8286% ( 59) 00:07:40.932 6351.951 - 6377.157: 2.3381% ( 90) 00:07:40.932 6377.157 - 6402.363: 2.9325% ( 105) 00:07:40.932 6402.363 - 6427.569: 3.7308% ( 141) 00:07:40.932 6427.569 - 6452.775: 4.6592% ( 164) 00:07:40.932 6452.775 - 6503.188: 7.2407% ( 456) 00:07:40.932 6503.188 - 6553.600: 11.3281% ( 722) 00:07:40.932 6553.600 - 6604.012: 16.3779% ( 892) 00:07:40.932 6604.012 - 6654.425: 22.6562% ( 1109) 00:07:40.932 6654.425 - 6704.837: 30.5593% ( 1396) 00:07:40.932 6704.837 - 6755.249: 38.9210% ( 1477) 00:07:40.932 6755.249 - 6805.662: 46.6542% ( 1366) 00:07:40.932 6805.662 - 6856.074: 54.2063% ( 1334) 00:07:40.932 6856.074 - 6906.486: 60.6205% ( 1133) 00:07:40.932 6906.486 - 6956.898: 66.0779% ( 964) 00:07:40.932 6956.898 - 7007.311: 70.9862% ( 867) 00:07:40.932 7007.311 - 7057.723: 74.2188% ( 571) 00:07:40.932 7057.723 - 7108.135: 77.2588% ( 537) 00:07:40.932 7108.135 - 7158.548: 79.8687% ( 461) 00:07:40.932 7158.548 - 7208.960: 81.7935% ( 340) 00:07:40.932 7208.960 - 7259.372: 83.4522% ( 293) 00:07:40.932 7259.372 - 7309.785: 84.8958% ( 255) 00:07:40.932 7309.785 - 7360.197: 85.8016% ( 160) 00:07:40.932 7360.197 - 7410.609: 86.5093% ( 125) 00:07:40.932 7410.609 - 7461.022: 87.3585% ( 150) 00:07:40.932 7461.022 - 7511.434: 87.8340% ( 84) 00:07:40.932 7511.434 - 7561.846: 88.1963% ( 64) 00:07:40.932 7561.846 - 7612.258: 88.3888% ( 34) 00:07:40.932 7612.258 - 7662.671: 88.5587% ( 30) 00:07:40.932 7662.671 - 7713.083: 88.7002% ( 25) 00:07:40.932 7713.083 - 7763.495: 88.8134% ( 20) 00:07:40.932 7763.495 - 7813.908: 88.9323% ( 21) 00:07:40.932 7813.908 - 7864.320: 89.1474% ( 38) 00:07:40.932 7864.320 - 7914.732: 89.2493% ( 18) 00:07:40.932 7914.732 - 7965.145: 89.3625% ( 20) 00:07:40.932 7965.145 - 8015.557: 89.4644% ( 18) 00:07:40.932 8015.557 - 8065.969: 89.6909% ( 40) 00:07:40.932 8065.969 - 8116.382: 89.9173% ( 40) 00:07:40.932 8116.382 - 8166.794: 90.1381% ( 39) 00:07:40.932 8166.794 - 8217.206: 90.5627% ( 75) 00:07:40.932 8217.206 - 8267.618: 90.8401% ( 49) 00:07:40.932 8267.618 - 8318.031: 91.0496% ( 37) 00:07:40.932 8318.031 - 8368.443: 91.2477% ( 35) 00:07:40.932 8368.443 - 8418.855: 91.5478% ( 53) 00:07:40.932 8418.855 - 8469.268: 91.7006% ( 27) 00:07:40.932 8469.268 - 8519.680: 92.0969% ( 70) 00:07:40.932 8519.680 - 8570.092: 92.3404% ( 43) 00:07:40.932 8570.092 - 8620.505: 92.5442% ( 36) 00:07:40.932 8620.505 - 8670.917: 92.7819% ( 42) 00:07:40.933 8670.917 - 8721.329: 93.1669% ( 68) 00:07:40.933 8721.329 - 8771.742: 93.3311% ( 29) 00:07:40.933 8771.742 - 8822.154: 93.5575% ( 40) 00:07:40.933 8822.154 - 8872.566: 93.7443% ( 33) 00:07:40.933 8872.566 - 8922.978: 93.8745% ( 23) 00:07:40.933 8922.978 - 8973.391: 94.0048% ( 23) 00:07:40.933 8973.391 - 9023.803: 94.1633% ( 28) 00:07:40.933 9023.803 - 9074.215: 94.3784% ( 38) 00:07:40.933 9074.215 - 9124.628: 94.5539% ( 31) 00:07:40.933 9124.628 - 9175.040: 94.7294% ( 31) 00:07:40.933 9175.040 - 9225.452: 95.0125% ( 50) 00:07:40.933 
9225.452 - 9275.865: 95.2106% ( 35) 00:07:40.933 9275.865 - 9326.277: 95.3578% ( 26) 00:07:40.933 9326.277 - 9376.689: 95.4937% ( 24) 00:07:40.933 9376.689 - 9427.102: 95.5956% ( 18) 00:07:40.933 9427.102 - 9477.514: 95.6861% ( 16) 00:07:40.933 9477.514 - 9527.926: 95.7937% ( 19) 00:07:40.933 9527.926 - 9578.338: 95.8899% ( 17) 00:07:40.933 9578.338 - 9628.751: 96.0654% ( 31) 00:07:40.933 9628.751 - 9679.163: 96.3202% ( 45) 00:07:40.933 9679.163 - 9729.575: 96.5636% ( 43) 00:07:40.933 9729.575 - 9779.988: 96.7052% ( 25) 00:07:40.933 9779.988 - 9830.400: 96.8807% ( 31) 00:07:40.933 9830.400 - 9880.812: 96.9826% ( 18) 00:07:40.933 9880.812 - 9931.225: 97.0448% ( 11) 00:07:40.933 9931.225 - 9981.637: 97.0901% ( 8) 00:07:40.933 9981.637 - 10032.049: 97.1637% ( 13) 00:07:40.933 10032.049 - 10082.462: 97.2373% ( 13) 00:07:40.933 10082.462 - 10132.874: 97.3053% ( 12) 00:07:40.933 10132.874 - 10183.286: 97.3902% ( 15) 00:07:40.933 10183.286 - 10233.698: 97.5091% ( 21) 00:07:40.933 10233.698 - 10284.111: 97.6902% ( 32) 00:07:40.933 10284.111 - 10334.523: 97.8827% ( 34) 00:07:40.933 10334.523 - 10384.935: 97.9846% ( 18) 00:07:40.933 10384.935 - 10435.348: 98.1601% ( 31) 00:07:40.933 10435.348 - 10485.760: 98.2790% ( 21) 00:07:40.933 10485.760 - 10536.172: 98.4262% ( 26) 00:07:40.933 10536.172 - 10586.585: 98.6526% ( 40) 00:07:40.933 10586.585 - 10636.997: 98.7206% ( 12) 00:07:40.933 10636.997 - 10687.409: 98.7489% ( 5) 00:07:40.933 10687.409 - 10737.822: 98.7772% ( 5) 00:07:40.933 10737.822 - 10788.234: 98.8111% ( 6) 00:07:40.933 10788.234 - 10838.646: 98.8394% ( 5) 00:07:40.933 10838.646 - 10889.058: 98.8621% ( 4) 00:07:40.933 10889.058 - 10939.471: 98.8734% ( 2) 00:07:40.933 10939.471 - 10989.883: 98.8904% ( 3) 00:07:40.933 10989.883 - 11040.295: 98.9074% ( 3) 00:07:40.933 11040.295 - 11090.708: 98.9130% ( 1) 00:07:40.933 11645.243 - 11695.655: 98.9187% ( 1) 00:07:40.933 11695.655 - 11746.068: 98.9470% ( 5) 00:07:40.933 11746.068 - 11796.480: 98.9753% ( 5) 00:07:40.933 11796.480 - 11846.892: 99.0206% ( 8) 00:07:40.933 11846.892 - 11897.305: 99.2131% ( 34) 00:07:40.933 11897.305 - 11947.717: 99.2414% ( 5) 00:07:40.933 11947.717 - 11998.129: 99.2640% ( 4) 00:07:40.933 11998.129 - 12048.542: 99.2754% ( 2) 00:07:40.933 25609.452 - 25710.277: 99.2980% ( 4) 00:07:40.933 25710.277 - 25811.102: 99.3207% ( 4) 00:07:40.933 25811.102 - 26012.751: 99.3716% ( 9) 00:07:40.933 26012.751 - 26214.400: 99.4169% ( 8) 00:07:40.933 26214.400 - 26416.049: 99.4622% ( 8) 00:07:40.933 26416.049 - 26617.698: 99.5131% ( 9) 00:07:40.933 26617.698 - 26819.348: 99.5584% ( 8) 00:07:40.933 26819.348 - 27020.997: 99.6037% ( 8) 00:07:40.933 27020.997 - 27222.646: 99.6377% ( 6) 00:07:40.933 32062.228 - 32263.877: 99.6660% ( 5) 00:07:40.933 32263.877 - 32465.526: 99.7113% ( 8) 00:07:40.933 32465.526 - 32667.175: 99.7622% ( 9) 00:07:40.933 32667.175 - 32868.825: 99.8075% ( 8) 00:07:40.933 32868.825 - 33070.474: 99.8528% ( 8) 00:07:40.933 33070.474 - 33272.123: 99.8924% ( 7) 00:07:40.933 33272.123 - 33473.772: 99.9377% ( 8) 00:07:40.933 33473.772 - 33675.422: 99.9830% ( 8) 00:07:40.933 33675.422 - 33877.071: 100.0000% ( 3) 00:07:40.933 00:07:40.933 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:40.933 ============================================================================== 00:07:40.933 Range in us Cumulative IO count 00:07:40.933 5973.858 - 5999.065: 0.0113% ( 2) 00:07:40.933 5999.065 - 6024.271: 0.0340% ( 4) 00:07:40.933 6024.271 - 6049.477: 0.0566% ( 4) 00:07:40.933 6049.477 - 6074.683: 0.1019% ( 8) 
00:07:40.933 6074.683 - 6099.889: 0.1642% ( 11) 00:07:40.933 6099.889 - 6125.095: 0.2208% ( 10) 00:07:40.933 6125.095 - 6150.302: 0.3057% ( 15) 00:07:40.933 6150.302 - 6175.508: 0.3680% ( 11) 00:07:40.933 6175.508 - 6200.714: 0.4812% ( 20) 00:07:40.933 6200.714 - 6225.920: 0.6963% ( 38) 00:07:40.933 6225.920 - 6251.126: 0.8775% ( 32) 00:07:40.933 6251.126 - 6276.332: 1.0983% ( 39) 00:07:40.933 6276.332 - 6301.538: 1.3474% ( 44) 00:07:40.933 6301.538 - 6326.745: 1.6814% ( 59) 00:07:40.933 6326.745 - 6351.951: 2.0267% ( 61) 00:07:40.933 6351.951 - 6377.157: 2.5872% ( 99) 00:07:40.933 6377.157 - 6402.363: 3.1703% ( 103) 00:07:40.933 6402.363 - 6427.569: 4.2233% ( 186) 00:07:40.933 6427.569 - 6452.775: 5.0894% ( 153) 00:07:40.933 6452.775 - 6503.188: 7.2690% ( 385) 00:07:40.933 6503.188 - 6553.600: 11.2206% ( 698) 00:07:40.933 6553.600 - 6604.012: 16.5025% ( 933) 00:07:40.933 6604.012 - 6654.425: 23.1148% ( 1168) 00:07:40.933 6654.425 - 6704.837: 31.2274% ( 1433) 00:07:40.933 6704.837 - 6755.249: 39.5947% ( 1478) 00:07:40.933 6755.249 - 6805.662: 46.8750% ( 1286) 00:07:40.933 6805.662 - 6856.074: 54.1214% ( 1280) 00:07:40.933 6856.074 - 6906.486: 61.2998% ( 1268) 00:07:40.933 6906.486 - 6956.898: 66.3553% ( 893) 00:07:40.933 6956.898 - 7007.311: 70.4540% ( 724) 00:07:40.933 7007.311 - 7057.723: 74.2980% ( 679) 00:07:40.933 7057.723 - 7108.135: 77.1060% ( 496) 00:07:40.933 7108.135 - 7158.548: 79.5630% ( 434) 00:07:40.933 7158.548 - 7208.960: 81.5784% ( 356) 00:07:40.933 7208.960 - 7259.372: 82.8974% ( 233) 00:07:40.933 7259.372 - 7309.785: 83.9731% ( 190) 00:07:40.933 7309.785 - 7360.197: 85.2298% ( 222) 00:07:40.933 7360.197 - 7410.609: 86.1583% ( 164) 00:07:40.933 7410.609 - 7461.022: 86.8207% ( 117) 00:07:40.933 7461.022 - 7511.434: 87.4943% ( 119) 00:07:40.933 7511.434 - 7561.846: 88.0378% ( 96) 00:07:40.933 7561.846 - 7612.258: 88.5417% ( 89) 00:07:40.933 7612.258 - 7662.671: 88.6889% ( 26) 00:07:40.933 7662.671 - 7713.083: 88.8134% ( 22) 00:07:40.933 7713.083 - 7763.495: 88.9040% ( 16) 00:07:40.933 7763.495 - 7813.908: 89.0115% ( 19) 00:07:40.933 7813.908 - 7864.320: 89.1361% ( 22) 00:07:40.933 7864.320 - 7914.732: 89.4305% ( 52) 00:07:40.933 7914.732 - 7965.145: 89.5947% ( 29) 00:07:40.933 7965.145 - 8015.557: 89.7588% ( 29) 00:07:40.933 8015.557 - 8065.969: 89.9457% ( 33) 00:07:40.933 8065.969 - 8116.382: 90.2061% ( 46) 00:07:40.933 8116.382 - 8166.794: 90.4438% ( 42) 00:07:40.933 8166.794 - 8217.206: 90.7779% ( 59) 00:07:40.933 8217.206 - 8267.618: 91.1005% ( 57) 00:07:40.933 8267.618 - 8318.031: 91.3440% ( 43) 00:07:40.933 8318.031 - 8368.443: 91.6950% ( 62) 00:07:40.933 8368.443 - 8418.855: 91.9327% ( 42) 00:07:40.933 8418.855 - 8469.268: 92.2158% ( 50) 00:07:40.933 8469.268 - 8519.680: 92.5045% ( 51) 00:07:40.933 8519.680 - 8570.092: 92.8046% ( 53) 00:07:40.933 8570.092 - 8620.505: 92.9688% ( 29) 00:07:40.933 8620.505 - 8670.917: 93.1159% ( 26) 00:07:40.933 8670.917 - 8721.329: 93.2688% ( 27) 00:07:40.933 8721.329 - 8771.742: 93.4500% ( 32) 00:07:40.933 8771.742 - 8822.154: 93.5858% ( 24) 00:07:40.933 8822.154 - 8872.566: 93.7840% ( 35) 00:07:40.933 8872.566 - 8922.978: 93.9764% ( 34) 00:07:40.933 8922.978 - 8973.391: 94.1463% ( 30) 00:07:40.933 8973.391 - 9023.803: 94.4463% ( 53) 00:07:40.933 9023.803 - 9074.215: 94.7067% ( 46) 00:07:40.933 9074.215 - 9124.628: 94.8879% ( 32) 00:07:40.933 9124.628 - 9175.040: 95.0125% ( 22) 00:07:40.933 9175.040 - 9225.452: 95.0917% ( 14) 00:07:40.933 9225.452 - 9275.865: 95.1880% ( 17) 00:07:40.933 9275.865 - 9326.277: 95.3578% ( 30) 00:07:40.933 
9326.277 - 9376.689: 95.4597% ( 18) 00:07:40.933 9376.689 - 9427.102: 95.5956% ( 24) 00:07:40.933 9427.102 - 9477.514: 95.6748% ( 14) 00:07:40.933 9477.514 - 9527.926: 95.7314% ( 10) 00:07:40.933 9527.926 - 9578.338: 95.8107% ( 14) 00:07:40.933 9578.338 - 9628.751: 95.9183% ( 19) 00:07:40.933 9628.751 - 9679.163: 96.0485% ( 23) 00:07:40.933 9679.163 - 9729.575: 96.1390% ( 16) 00:07:40.933 9729.575 - 9779.988: 96.3768% ( 42) 00:07:40.933 9779.988 - 9830.400: 96.4731% ( 17) 00:07:40.933 9830.400 - 9880.812: 96.5410% ( 12) 00:07:40.933 9880.812 - 9931.225: 96.6372% ( 17) 00:07:40.933 9931.225 - 9981.637: 96.7618% ( 22) 00:07:40.933 9981.637 - 10032.049: 96.8863% ( 22) 00:07:40.933 10032.049 - 10082.462: 96.9826% ( 17) 00:07:40.933 10082.462 - 10132.874: 97.1241% ( 25) 00:07:40.933 10132.874 - 10183.286: 97.3449% ( 39) 00:07:40.933 10183.286 - 10233.698: 97.5487% ( 36) 00:07:40.933 10233.698 - 10284.111: 97.6506% ( 18) 00:07:40.933 10284.111 - 10334.523: 97.7468% ( 17) 00:07:40.933 10334.523 - 10384.935: 97.9053% ( 28) 00:07:40.933 10384.935 - 10435.348: 98.0299% ( 22) 00:07:40.933 10435.348 - 10485.760: 98.1601% ( 23) 00:07:40.933 10485.760 - 10536.172: 98.2450% ( 15) 00:07:40.933 10536.172 - 10586.585: 98.3016% ( 10) 00:07:40.933 10586.585 - 10636.997: 98.3582% ( 10) 00:07:40.933 10636.997 - 10687.409: 98.4375% ( 14) 00:07:40.933 10687.409 - 10737.822: 98.5507% ( 20) 00:07:40.933 10737.822 - 10788.234: 98.6243% ( 13) 00:07:40.933 10788.234 - 10838.646: 98.6979% ( 13) 00:07:40.933 10838.646 - 10889.058: 98.7659% ( 12) 00:07:40.933 10889.058 - 10939.471: 98.8338% ( 12) 00:07:40.933 10939.471 - 10989.883: 98.8961% ( 11) 00:07:40.933 10989.883 - 11040.295: 98.9130% ( 3) 00:07:40.933 11695.655 - 11746.068: 98.9300% ( 3) 00:07:40.933 11746.068 - 11796.480: 98.9583% ( 5) 00:07:40.933 11796.480 - 11846.892: 98.9866% ( 5) 00:07:40.933 11846.892 - 11897.305: 99.1791% ( 34) 00:07:40.933 11897.305 - 11947.717: 99.2188% ( 7) 00:07:40.933 11947.717 - 11998.129: 99.2357% ( 3) 00:07:40.933 11998.129 - 12048.542: 99.2584% ( 4) 00:07:40.933 12048.542 - 12098.954: 99.2754% ( 3) 00:07:40.933 24097.083 - 24197.908: 99.2867% ( 2) 00:07:40.933 24197.908 - 24298.732: 99.3093% ( 4) 00:07:40.933 24298.732 - 24399.557: 99.3263% ( 3) 00:07:40.934 24399.557 - 24500.382: 99.3490% ( 4) 00:07:40.934 24500.382 - 24601.206: 99.3716% ( 4) 00:07:40.934 24601.206 - 24702.031: 99.3999% ( 5) 00:07:40.934 24702.031 - 24802.855: 99.4226% ( 4) 00:07:40.934 24802.855 - 24903.680: 99.4452% ( 4) 00:07:40.934 24903.680 - 25004.505: 99.4678% ( 4) 00:07:40.934 25004.505 - 25105.329: 99.4962% ( 5) 00:07:40.934 25105.329 - 25206.154: 99.5188% ( 4) 00:07:40.934 25206.154 - 25306.978: 99.5414% ( 4) 00:07:40.934 25306.978 - 25407.803: 99.5641% ( 4) 00:07:40.934 25407.803 - 25508.628: 99.5867% ( 4) 00:07:40.934 25508.628 - 25609.452: 99.6094% ( 4) 00:07:40.934 25609.452 - 25710.277: 99.6320% ( 4) 00:07:40.934 25710.277 - 25811.102: 99.6377% ( 1) 00:07:40.934 30449.034 - 30650.683: 99.6433% ( 1) 00:07:40.934 30650.683 - 30852.332: 99.6943% ( 9) 00:07:40.934 30852.332 - 31053.982: 99.7339% ( 7) 00:07:40.934 31053.982 - 31255.631: 99.7849% ( 9) 00:07:40.934 31255.631 - 31457.280: 99.8302% ( 8) 00:07:40.934 31457.280 - 31658.929: 99.8811% ( 9) 00:07:40.934 31658.929 - 31860.578: 99.9264% ( 8) 00:07:40.934 31860.578 - 32062.228: 99.9717% ( 8) 00:07:40.934 32062.228 - 32263.877: 100.0000% ( 5) 00:07:40.934 00:07:40.934 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:40.934 
============================================================================== 00:07:40.934 Range in us Cumulative IO count 00:07:40.934 5923.446 - 5948.652: 0.0056% ( 1) 00:07:40.934 5948.652 - 5973.858: 0.0282% ( 4) 00:07:40.934 5973.858 - 5999.065: 0.0564% ( 5) 00:07:40.934 5999.065 - 6024.271: 0.0733% ( 3) 00:07:40.934 6024.271 - 6049.477: 0.0903% ( 3) 00:07:40.934 6049.477 - 6074.683: 0.1128% ( 4) 00:07:40.934 6074.683 - 6099.889: 0.1354% ( 4) 00:07:40.934 6099.889 - 6125.095: 0.1861% ( 9) 00:07:40.934 6125.095 - 6150.302: 0.2369% ( 9) 00:07:40.934 6150.302 - 6175.508: 0.4061% ( 30) 00:07:40.934 6175.508 - 6200.714: 0.5246% ( 21) 00:07:40.934 6200.714 - 6225.920: 0.6825% ( 28) 00:07:40.934 6225.920 - 6251.126: 0.8348% ( 27) 00:07:40.934 6251.126 - 6276.332: 1.0718% ( 42) 00:07:40.934 6276.332 - 6301.538: 1.3256% ( 45) 00:07:40.934 6301.538 - 6326.745: 1.6697% ( 61) 00:07:40.934 6326.745 - 6351.951: 2.0081% ( 60) 00:07:40.934 6351.951 - 6377.157: 2.5440% ( 95) 00:07:40.934 6377.157 - 6402.363: 3.1758% ( 112) 00:07:40.934 6402.363 - 6427.569: 3.9486% ( 137) 00:07:40.934 6427.569 - 6452.775: 5.0654% ( 198) 00:07:40.934 6452.775 - 6503.188: 7.8802% ( 499) 00:07:40.934 6503.188 - 6553.600: 11.8739% ( 708) 00:07:40.934 6553.600 - 6604.012: 17.4695% ( 992) 00:07:40.934 6604.012 - 6654.425: 24.3400% ( 1218) 00:07:40.934 6654.425 - 6704.837: 31.4982% ( 1269) 00:07:40.934 6704.837 - 6755.249: 39.4235% ( 1405) 00:07:40.934 6755.249 - 6805.662: 46.7171% ( 1293) 00:07:40.934 6805.662 - 6856.074: 54.2532% ( 1336) 00:07:40.934 6856.074 - 6906.486: 60.2268% ( 1059) 00:07:40.934 6906.486 - 6956.898: 65.1737% ( 877) 00:07:40.934 6956.898 - 7007.311: 69.6694% ( 797) 00:07:40.934 7007.311 - 7057.723: 73.2119% ( 628) 00:07:40.934 7057.723 - 7108.135: 76.4440% ( 573) 00:07:40.934 7108.135 - 7158.548: 78.8245% ( 422) 00:07:40.934 7158.548 - 7208.960: 80.7875% ( 348) 00:07:40.934 7208.960 - 7259.372: 82.4289% ( 291) 00:07:40.934 7259.372 - 7309.785: 83.6079% ( 209) 00:07:40.934 7309.785 - 7360.197: 84.5668% ( 170) 00:07:40.934 7360.197 - 7410.609: 85.4468% ( 156) 00:07:40.934 7410.609 - 7461.022: 86.0278% ( 103) 00:07:40.934 7461.022 - 7511.434: 86.6652% ( 113) 00:07:40.934 7511.434 - 7561.846: 87.1446% ( 85) 00:07:40.934 7561.846 - 7612.258: 87.6128% ( 83) 00:07:40.934 7612.258 - 7662.671: 88.0472% ( 77) 00:07:40.934 7662.671 - 7713.083: 88.3461% ( 53) 00:07:40.934 7713.083 - 7763.495: 88.5323% ( 33) 00:07:40.934 7763.495 - 7813.908: 88.7015% ( 30) 00:07:40.934 7813.908 - 7864.320: 88.9440% ( 43) 00:07:40.934 7864.320 - 7914.732: 89.3558% ( 73) 00:07:40.934 7914.732 - 7965.145: 89.7845% ( 76) 00:07:40.934 7965.145 - 8015.557: 89.9819% ( 35) 00:07:40.934 8015.557 - 8065.969: 90.3373% ( 63) 00:07:40.934 8065.969 - 8116.382: 90.6983% ( 64) 00:07:40.934 8116.382 - 8166.794: 90.9240% ( 40) 00:07:40.934 8166.794 - 8217.206: 91.1101% ( 33) 00:07:40.934 8217.206 - 8267.618: 91.5952% ( 86) 00:07:40.934 8267.618 - 8318.031: 91.8547% ( 46) 00:07:40.934 8318.031 - 8368.443: 92.1255% ( 48) 00:07:40.934 8368.443 - 8418.855: 92.3003% ( 31) 00:07:40.934 8418.855 - 8469.268: 92.5824% ( 50) 00:07:40.934 8469.268 - 8519.680: 92.7685% ( 33) 00:07:40.934 8519.680 - 8570.092: 92.9434% ( 31) 00:07:40.934 8570.092 - 8620.505: 93.0505% ( 19) 00:07:40.934 8620.505 - 8670.917: 93.1408% ( 16) 00:07:40.934 8670.917 - 8721.329: 93.2310% ( 16) 00:07:40.934 8721.329 - 8771.742: 93.3551% ( 22) 00:07:40.934 8771.742 - 8822.154: 93.4736% ( 21) 00:07:40.934 8822.154 - 8872.566: 93.6033% ( 23) 00:07:40.934 8872.566 - 8922.978: 93.7951% ( 34) 
00:07:40.934 8922.978 - 8973.391: 93.9305% ( 24) 00:07:40.934 8973.391 - 9023.803: 94.0095% ( 14) 00:07:40.934 9023.803 - 9074.215: 94.0941% ( 15) 00:07:40.934 9074.215 - 9124.628: 94.1618% ( 12) 00:07:40.934 9124.628 - 9175.040: 94.3254% ( 29) 00:07:40.934 9175.040 - 9225.452: 94.5115% ( 33) 00:07:40.934 9225.452 - 9275.865: 94.5848% ( 13) 00:07:40.934 9275.865 - 9326.277: 94.6412% ( 10) 00:07:40.934 9326.277 - 9376.689: 94.7315% ( 16) 00:07:40.934 9376.689 - 9427.102: 94.8161% ( 15) 00:07:40.934 9427.102 - 9477.514: 95.0643% ( 44) 00:07:40.934 9477.514 - 9527.926: 95.1997% ( 24) 00:07:40.934 9527.926 - 9578.338: 95.3520% ( 27) 00:07:40.934 9578.338 - 9628.751: 95.4704% ( 21) 00:07:40.934 9628.751 - 9679.163: 95.5776% ( 19) 00:07:40.934 9679.163 - 9729.575: 95.7920% ( 38) 00:07:40.934 9729.575 - 9779.988: 95.9104% ( 21) 00:07:40.934 9779.988 - 9830.400: 96.0514% ( 25) 00:07:40.934 9830.400 - 9880.812: 96.1699% ( 21) 00:07:40.934 9880.812 - 9931.225: 96.3278% ( 28) 00:07:40.934 9931.225 - 9981.637: 96.4914% ( 29) 00:07:40.934 9981.637 - 10032.049: 96.6494% ( 28) 00:07:40.934 10032.049 - 10082.462: 96.7678% ( 21) 00:07:40.934 10082.462 - 10132.874: 96.9088% ( 25) 00:07:40.934 10132.874 - 10183.286: 97.0104% ( 18) 00:07:40.934 10183.286 - 10233.698: 97.0837% ( 13) 00:07:40.934 10233.698 - 10284.111: 97.1683% ( 15) 00:07:40.934 10284.111 - 10334.523: 97.3263% ( 28) 00:07:40.934 10334.523 - 10384.935: 97.3827% ( 10) 00:07:40.934 10384.935 - 10435.348: 97.4165% ( 6) 00:07:40.934 10435.348 - 10485.760: 97.4616% ( 8) 00:07:40.934 10485.760 - 10536.172: 97.5181% ( 10) 00:07:40.934 10536.172 - 10586.585: 97.5801% ( 11) 00:07:40.934 10586.585 - 10636.997: 97.6421% ( 11) 00:07:40.934 10636.997 - 10687.409: 97.7268% ( 15) 00:07:40.934 10687.409 - 10737.822: 97.8114% ( 15) 00:07:40.934 10737.822 - 10788.234: 97.8903% ( 14) 00:07:40.934 10788.234 - 10838.646: 98.0596% ( 30) 00:07:40.934 10838.646 - 10889.058: 98.1329% ( 13) 00:07:40.934 10889.058 - 10939.471: 98.2739% ( 25) 00:07:40.934 10939.471 - 10989.883: 98.3642% ( 16) 00:07:40.934 10989.883 - 11040.295: 98.4601% ( 17) 00:07:40.934 11040.295 - 11090.708: 98.5672% ( 19) 00:07:40.934 11090.708 - 11141.120: 98.6688% ( 18) 00:07:40.934 11141.120 - 11191.532: 98.7647% ( 17) 00:07:40.934 11191.532 - 11241.945: 98.8662% ( 18) 00:07:40.934 11241.945 - 11292.357: 98.9170% ( 9) 00:07:40.934 11292.357 - 11342.769: 98.9959% ( 14) 00:07:40.934 11342.769 - 11393.182: 99.0636% ( 12) 00:07:40.934 11393.182 - 11443.594: 99.1144% ( 9) 00:07:40.934 11443.594 - 11494.006: 99.1652% ( 9) 00:07:40.934 11494.006 - 11544.418: 99.1764% ( 2) 00:07:40.934 11544.418 - 11594.831: 99.1877% ( 2) 00:07:40.934 11594.831 - 11645.243: 99.1934% ( 1) 00:07:40.934 11645.243 - 11695.655: 99.1990% ( 1) 00:07:40.934 11695.655 - 11746.068: 99.2103% ( 2) 00:07:40.934 11746.068 - 11796.480: 99.2216% ( 2) 00:07:40.934 11796.480 - 11846.892: 99.2272% ( 1) 00:07:40.934 11846.892 - 11897.305: 99.2385% ( 2) 00:07:40.934 11897.305 - 11947.717: 99.2441% ( 1) 00:07:40.934 11947.717 - 11998.129: 99.2554% ( 2) 00:07:40.934 11998.129 - 12048.542: 99.2667% ( 2) 00:07:40.934 12048.542 - 12098.954: 99.2723% ( 1) 00:07:40.934 12098.954 - 12149.366: 99.2780% ( 1) 00:07:40.934 16938.535 - 17039.360: 99.2836% ( 1) 00:07:40.934 17039.360 - 17140.185: 99.3005% ( 3) 00:07:40.934 17140.185 - 17241.009: 99.3287% ( 5) 00:07:40.934 17241.009 - 17341.834: 99.3457% ( 3) 00:07:40.934 17341.834 - 17442.658: 99.3739% ( 5) 00:07:40.934 17442.658 - 17543.483: 99.3964% ( 4) 00:07:40.934 17543.483 - 17644.308: 99.4190% ( 4) 
00:07:40.934 17644.308 - 17745.132: 99.4416% ( 4) 00:07:40.935 17745.132 - 17845.957: 99.4641% ( 4) 00:07:40.935 17845.957 - 17946.782: 99.4867% ( 4) 00:07:40.935 17946.782 - 18047.606: 99.5149% ( 5) 00:07:40.935 18047.606 - 18148.431: 99.5375% ( 4) 00:07:40.935 18148.431 - 18249.255: 99.5600% ( 4) 00:07:40.935 18249.255 - 18350.080: 99.5826% ( 4) 00:07:40.935 18350.080 - 18450.905: 99.6051% ( 4) 00:07:40.935 18450.905 - 18551.729: 99.6333% ( 5) 00:07:40.935 18551.729 - 18652.554: 99.6390% ( 1) 00:07:40.935 23895.434 - 23996.258: 99.6616% ( 4) 00:07:40.935 23996.258 - 24097.083: 99.6841% ( 4) 00:07:40.935 24097.083 - 24197.908: 99.7067% ( 4) 00:07:40.935 24197.908 - 24298.732: 99.7292% ( 4) 00:07:40.935 24298.732 - 24399.557: 99.7518% ( 4) 00:07:40.935 24399.557 - 24500.382: 99.7744% ( 4) 00:07:40.935 24500.382 - 24601.206: 99.7969% ( 4) 00:07:40.935 24601.206 - 24702.031: 99.8251% ( 5) 00:07:40.935 24702.031 - 24802.855: 99.8477% ( 4) 00:07:40.935 24802.855 - 24903.680: 99.8703% ( 4) 00:07:40.935 24903.680 - 25004.505: 99.8928% ( 4) 00:07:40.935 25004.505 - 25105.329: 99.9210% ( 5) 00:07:40.935 25105.329 - 25206.154: 99.9436% ( 4) 00:07:40.935 25206.154 - 25306.978: 99.9662% ( 4) 00:07:40.935 25306.978 - 25407.803: 99.9887% ( 4) 00:07:40.935 25407.803 - 25508.628: 100.0000% ( 2) 00:07:40.935 00:07:40.935 03:58:34 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:40.935 00:07:40.935 real 0m2.507s 00:07:40.935 user 0m2.187s 00:07:40.935 sys 0m0.214s 00:07:40.935 03:58:34 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.935 ************************************ 00:07:40.935 END TEST nvme_perf 00:07:40.935 ************************************ 00:07:40.935 03:58:34 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:40.935 03:58:34 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:40.935 03:58:34 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:40.935 03:58:34 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.935 03:58:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.192 ************************************ 00:07:41.192 START TEST nvme_hello_world 00:07:41.192 ************************************ 00:07:41.192 03:58:34 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:41.192 Initializing NVMe Controllers 00:07:41.192 Attached to 0000:00:13.0 00:07:41.192 Namespace ID: 1 size: 1GB 00:07:41.192 Attached to 0000:00:10.0 00:07:41.192 Namespace ID: 1 size: 6GB 00:07:41.192 Attached to 0000:00:11.0 00:07:41.192 Namespace ID: 1 size: 5GB 00:07:41.192 Attached to 0000:00:12.0 00:07:41.192 Namespace ID: 1 size: 4GB 00:07:41.192 Namespace ID: 2 size: 4GB 00:07:41.192 Namespace ID: 3 size: 4GB 00:07:41.193 Initialization complete. 00:07:41.193 INFO: using host memory buffer for IO 00:07:41.193 Hello world! 00:07:41.193 INFO: using host memory buffer for IO 00:07:41.193 Hello world! 00:07:41.193 INFO: using host memory buffer for IO 00:07:41.193 Hello world! 00:07:41.193 INFO: using host memory buffer for IO 00:07:41.193 Hello world! 00:07:41.193 INFO: using host memory buffer for IO 00:07:41.193 Hello world! 00:07:41.193 INFO: using host memory buffer for IO 00:07:41.193 Hello world! 
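Editor's note on the nvme_perf output earlier in this run: the tool prints one latency histogram per attached namespace, and each row gives a latency range in microseconds, the cumulative share of I/Os whose latency falls at or below that range, and, in parentheses, the number of I/Os that landed in that row alone. The C sketch below shows how such a "range: cumulative% (count)" table can be derived from raw per-bucket counts. The bucket edges and counts are made-up placeholders; this illustrates the row format only and is not the SPDK tool's implementation.

#include <stdio.h>

/* Illustrative only: print rows in the style of the nvme_perf latency
 * histograms above. Edges and counts are placeholder data. */
int main(void)
{
    /* bucket i covers [edges[i], edges[i+1]) microseconds */
    const double edges[] = { 5948.652, 5973.858, 5999.065, 6024.271, 6049.477 };
    const unsigned long counts[] = { 1, 2, 4, 3 };   /* I/Os per bucket */
    const size_t nbuckets = sizeof(counts) / sizeof(counts[0]);

    unsigned long total = 0;
    for (size_t i = 0; i < nbuckets; i++)
        total += counts[i];

    unsigned long running = 0;
    printf("   Range in us      Cumulative    IO count\n");
    for (size_t i = 0; i < nbuckets; i++) {
        running += counts[i];
        printf("%10.3f - %10.3f: %8.4f%% (%5lu)\n",
               edges[i], edges[i + 1],
               100.0 * (double)running / (double)total, counts[i]);
    }
    return 0;
}

Read this way, a row such as "6553.600 - 6604.012: 11.3281% ( 722)" means 722 I/Os fell in that bucket and 11.3281% of all I/Os had completed at or below its upper edge.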
00:07:41.193 00:07:41.193 real 0m0.194s 00:07:41.193 user 0m0.067s 00:07:41.193 sys 0m0.093s 00:07:41.193 03:58:34 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.193 03:58:34 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:41.193 ************************************ 00:07:41.193 END TEST nvme_hello_world 00:07:41.193 ************************************ 00:07:41.193 03:58:34 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:41.193 03:58:34 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.193 03:58:34 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.193 03:58:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.193 ************************************ 00:07:41.193 START TEST nvme_sgl 00:07:41.193 ************************************ 00:07:41.193 03:58:34 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:41.451 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:41.451 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:41.451 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:41.451 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:41.451 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:41.451 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:41.451 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:41.451 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:41.451 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:41.451 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:41.451 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:41.451 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:41.451 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:41.451 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:41.451 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:41.451 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:41.451 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:41.451 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:41.451 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:41.451 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:41.451 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:41.451 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:41.451 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:41.451 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:41.451 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:41.451 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:41.451 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:41.451 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:41.451 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:41.451 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:41.451 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:41.451 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:41.451 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:41.451 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:41.451 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:41.451 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:41.451 NVMe Readv/Writev Request test 00:07:41.451 Attached to 0000:00:13.0 00:07:41.451 Attached to 0000:00:10.0 00:07:41.451 Attached to 0000:00:11.0 00:07:41.451 Attached to 0000:00:12.0 00:07:41.451 0000:00:10.0: build_io_request_2 test passed 00:07:41.451 0000:00:10.0: build_io_request_4 test passed 00:07:41.451 0000:00:10.0: build_io_request_5 test passed 00:07:41.451 0000:00:10.0: build_io_request_6 test passed 00:07:41.451 0000:00:10.0: build_io_request_7 test passed 00:07:41.451 0000:00:10.0: build_io_request_10 test passed 00:07:41.451 0000:00:11.0: build_io_request_2 test passed 00:07:41.451 0000:00:11.0: build_io_request_4 test passed 00:07:41.451 0000:00:11.0: build_io_request_5 test passed 00:07:41.451 0000:00:11.0: build_io_request_6 test passed 00:07:41.451 0000:00:11.0: build_io_request_7 test passed 00:07:41.451 0000:00:11.0: build_io_request_10 test passed 00:07:41.451 Cleaning up... 00:07:41.451 00:07:41.451 real 0m0.270s 00:07:41.451 user 0m0.132s 00:07:41.451 sys 0m0.096s 00:07:41.451 03:58:34 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.451 03:58:34 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:41.451 ************************************ 00:07:41.451 END TEST nvme_sgl 00:07:41.451 ************************************ 00:07:41.710 03:58:34 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:41.710 03:58:34 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.710 03:58:34 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.710 03:58:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.710 ************************************ 00:07:41.710 START TEST nvme_e2edp 00:07:41.710 ************************************ 00:07:41.710 03:58:34 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:41.710 NVMe Write/Read with End-to-End data protection test 00:07:41.710 Attached to 0000:00:13.0 00:07:41.710 Attached to 0000:00:10.0 00:07:41.710 Attached to 0000:00:11.0 00:07:41.710 Attached to 0000:00:12.0 00:07:41.710 Cleaning up... 
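Editor's note on the nvme_dp run above: end-to-end data protection appends an 8-byte protection-information tuple to each logical block, consisting of a 2-byte guard tag (a CRC over the block data), a 2-byte application tag, and a 4-byte reference tag. The sketch below computes the guard with the CRC-16/T10-DIF polynomial 0x8BB7 over a dummy 512-byte block. It is a generic, self-contained illustration of the tuple layout, not the code exercised by the test.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* CRC-16/T10-DIF: polynomial 0x8BB7, initial value 0, no reflection. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* The 8-byte protection information tuple carried with each logical block
 * (fields are big-endian on the wire; endianness is ignored in this sketch). */
struct t10_dif_tuple {
    uint16_t guard;     /* CRC of the block data */
    uint16_t app_tag;   /* application-defined tag */
    uint32_t ref_tag;   /* typically the lower 32 bits of the starting LBA */
};

int main(void)
{
    uint8_t block[512];
    memset(block, 0xA5, sizeof(block));      /* dummy data pattern */

    struct t10_dif_tuple pi = {
        .guard   = crc16_t10dif(block, sizeof(block)),
        .app_tag = 0,
        .ref_tag = 0,                        /* stand-in for the LBA */
    };

    printf("guard=0x%04x app_tag=0x%04x ref_tag=0x%08x\n",
           (unsigned)pi.guard, (unsigned)pi.app_tag, (unsigned)pi.ref_tag);
    return 0;
}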
00:07:41.710 00:07:41.710 real 0m0.194s 00:07:41.710 user 0m0.056s 00:07:41.710 sys 0m0.095s 00:07:41.710 03:58:34 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.710 ************************************ 00:07:41.710 03:58:34 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:41.710 END TEST nvme_e2edp 00:07:41.710 ************************************ 00:07:41.710 03:58:34 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:41.710 03:58:34 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.710 03:58:34 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.710 03:58:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.710 ************************************ 00:07:41.710 START TEST nvme_reserve 00:07:41.710 ************************************ 00:07:41.710 03:58:34 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:41.968 ===================================================== 00:07:41.968 NVMe Controller at PCI bus 0, device 19, function 0 00:07:41.968 ===================================================== 00:07:41.968 Reservations: Not Supported 00:07:41.968 ===================================================== 00:07:41.968 NVMe Controller at PCI bus 0, device 16, function 0 00:07:41.968 ===================================================== 00:07:41.968 Reservations: Not Supported 00:07:41.968 ===================================================== 00:07:41.968 NVMe Controller at PCI bus 0, device 17, function 0 00:07:41.968 ===================================================== 00:07:41.968 Reservations: Not Supported 00:07:41.968 ===================================================== 00:07:41.968 NVMe Controller at PCI bus 0, device 18, function 0 00:07:41.968 ===================================================== 00:07:41.968 Reservations: Not Supported 00:07:41.968 Reservation test passed 00:07:41.968 00:07:41.968 real 0m0.174s 00:07:41.968 user 0m0.056s 00:07:41.968 sys 0m0.086s 00:07:41.968 03:58:35 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.968 03:58:35 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:41.968 ************************************ 00:07:41.968 END TEST nvme_reserve 00:07:41.968 ************************************ 00:07:41.968 03:58:35 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:41.968 03:58:35 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.968 03:58:35 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.968 03:58:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.968 ************************************ 00:07:41.968 START TEST nvme_err_injection 00:07:41.968 ************************************ 00:07:41.968 03:58:35 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:42.225 NVMe Error Injection test 00:07:42.225 Attached to 0000:00:13.0 00:07:42.225 Attached to 0000:00:10.0 00:07:42.225 Attached to 0000:00:11.0 00:07:42.225 Attached to 0000:00:12.0 00:07:42.225 0000:00:11.0: get features failed as expected 00:07:42.225 0000:00:12.0: get features failed as expected 00:07:42.225 0000:00:13.0: get features failed as expected 00:07:42.225 0000:00:10.0: get features failed as expected 00:07:42.225 
0000:00:13.0: get features successfully as expected 00:07:42.225 0000:00:10.0: get features successfully as expected 00:07:42.225 0000:00:11.0: get features successfully as expected 00:07:42.225 0000:00:12.0: get features successfully as expected 00:07:42.225 0000:00:13.0: read failed as expected 00:07:42.225 0000:00:10.0: read failed as expected 00:07:42.225 0000:00:11.0: read failed as expected 00:07:42.225 0000:00:12.0: read failed as expected 00:07:42.225 0000:00:13.0: read successfully as expected 00:07:42.225 0000:00:10.0: read successfully as expected 00:07:42.225 0000:00:11.0: read successfully as expected 00:07:42.225 0000:00:12.0: read successfully as expected 00:07:42.225 Cleaning up... 00:07:42.225 00:07:42.225 real 0m0.184s 00:07:42.225 user 0m0.065s 00:07:42.225 sys 0m0.089s 00:07:42.225 03:58:35 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.225 ************************************ 00:07:42.225 END TEST nvme_err_injection 00:07:42.225 ************************************ 00:07:42.225 03:58:35 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:42.225 03:58:35 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:42.225 03:58:35 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:42.225 03:58:35 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.225 03:58:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.225 ************************************ 00:07:42.225 START TEST nvme_overhead 00:07:42.225 ************************************ 00:07:42.225 03:58:35 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:43.648 Initializing NVMe Controllers 00:07:43.648 Attached to 0000:00:13.0 00:07:43.648 Attached to 0000:00:10.0 00:07:43.648 Attached to 0000:00:11.0 00:07:43.648 Attached to 0000:00:12.0 00:07:43.648 Initialization complete. Launching workers. 
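Editor's note on the overhead run launching here: its report below gives per-operation submit and complete times as avg/min/max in nanoseconds, followed by submit and complete histograms in the same cumulative row format illustrated earlier for nvme_perf. A minimal sketch of that kind of min/max/average accumulator, timing an arbitrary placeholder function with clock_gettime, is shown below; the timed function and the iteration count are placeholders, and this is not the tool's actual measurement path.

#define _POSIX_C_SOURCE 199309L  /* for clock_gettime with strict C modes */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

struct lat_stats {
    uint64_t count, sum_ns, min_ns, max_ns;
};

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

static void record(struct lat_stats *s, uint64_t ns)
{
    if (s->count == 0 || ns < s->min_ns) s->min_ns = ns;
    if (ns > s->max_ns)                  s->max_ns = ns;
    s->sum_ns += ns;
    s->count++;
}

/* Placeholder for the operation being timed (e.g. queuing one I/O). */
static void submit_one(void) { /* no-op stand-in */ }

int main(void)
{
    struct lat_stats submit = { 0, 0, 0, 0 };

    for (int i = 0; i < 100000; i++) {
        uint64_t t0 = now_ns();
        submit_one();
        record(&submit, now_ns() - t0);
    }

    printf("submit (in ns) avg, min, max = %.1f, %llu, %llu\n",
           (double)submit.sum_ns / (double)submit.count,
           (unsigned long long)submit.min_ns,
           (unsigned long long)submit.max_ns);
    return 0;
}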
00:07:43.648 submit (in ns) avg, min, max = 11497.4, 10016.2, 285536.2 00:07:43.648 complete (in ns) avg, min, max = 7737.6, 7211.5, 58783.1 00:07:43.648 00:07:43.648 Submit histogram 00:07:43.648 ================ 00:07:43.648 Range in us Cumulative Count 00:07:43.648 9.994 - 10.043: 0.0069% ( 1) 00:07:43.648 10.585 - 10.634: 0.0138% ( 1) 00:07:43.648 10.782 - 10.831: 0.0551% ( 6) 00:07:43.648 10.831 - 10.880: 0.3303% ( 40) 00:07:43.648 10.880 - 10.929: 1.3900% ( 154) 00:07:43.648 10.929 - 10.978: 4.0531% ( 387) 00:07:43.648 10.978 - 11.028: 10.0261% ( 868) 00:07:43.648 11.028 - 11.077: 19.7839% ( 1418) 00:07:43.648 11.077 - 11.126: 31.8607% ( 1755) 00:07:43.648 11.126 - 11.175: 44.3160% ( 1810) 00:07:43.648 11.175 - 11.225: 55.2918% ( 1595) 00:07:43.648 11.225 - 11.274: 63.9623% ( 1260) 00:07:43.648 11.274 - 11.323: 70.4927% ( 949) 00:07:43.648 11.323 - 11.372: 75.3372% ( 704) 00:07:43.648 11.372 - 11.422: 78.6402% ( 480) 00:07:43.648 11.422 - 11.471: 81.0281% ( 347) 00:07:43.648 11.471 - 11.520: 82.9273% ( 276) 00:07:43.648 11.520 - 11.569: 84.2692% ( 195) 00:07:43.648 11.569 - 11.618: 85.2670% ( 145) 00:07:43.648 11.618 - 11.668: 86.2166% ( 138) 00:07:43.648 11.668 - 11.717: 86.9736% ( 110) 00:07:43.648 11.717 - 11.766: 87.5860% ( 89) 00:07:43.648 11.766 - 11.815: 88.1916% ( 88) 00:07:43.648 11.815 - 11.865: 88.9004% ( 103) 00:07:43.648 11.865 - 11.914: 89.7192% ( 119) 00:07:43.648 11.914 - 11.963: 90.6689% ( 138) 00:07:43.648 11.963 - 12.012: 91.5634% ( 130) 00:07:43.648 12.012 - 12.062: 92.2653% ( 102) 00:07:43.648 12.062 - 12.111: 93.1599% ( 130) 00:07:43.648 12.111 - 12.160: 93.8481% ( 100) 00:07:43.648 12.160 - 12.209: 94.4261% ( 84) 00:07:43.648 12.209 - 12.258: 94.7702% ( 50) 00:07:43.648 12.258 - 12.308: 95.1418% ( 54) 00:07:43.648 12.308 - 12.357: 95.4377% ( 43) 00:07:43.648 12.357 - 12.406: 95.6235% ( 27) 00:07:43.648 12.406 - 12.455: 95.7680% ( 21) 00:07:43.648 12.455 - 12.505: 95.8712% ( 15) 00:07:43.648 12.505 - 12.554: 95.9400% ( 10) 00:07:43.648 12.554 - 12.603: 96.0226% ( 12) 00:07:43.648 12.603 - 12.702: 96.1051% ( 12) 00:07:43.648 12.702 - 12.800: 96.1808% ( 11) 00:07:43.648 12.800 - 12.898: 96.2359% ( 8) 00:07:43.648 12.898 - 12.997: 96.3116% ( 11) 00:07:43.648 12.997 - 13.095: 96.4630% ( 22) 00:07:43.648 13.095 - 13.194: 96.5937% ( 19) 00:07:43.648 13.194 - 13.292: 96.7038% ( 16) 00:07:43.648 13.292 - 13.391: 96.7933% ( 13) 00:07:43.648 13.391 - 13.489: 96.9516% ( 23) 00:07:43.648 13.489 - 13.588: 97.0204% ( 10) 00:07:43.648 13.588 - 13.686: 97.1098% ( 13) 00:07:43.648 13.686 - 13.785: 97.1718% ( 9) 00:07:43.648 13.785 - 13.883: 97.2268% ( 8) 00:07:43.648 13.883 - 13.982: 97.2543% ( 4) 00:07:43.648 13.982 - 14.080: 97.2887% ( 5) 00:07:43.648 14.080 - 14.178: 97.3507% ( 9) 00:07:43.648 14.178 - 14.277: 97.3644% ( 2) 00:07:43.648 14.277 - 14.375: 97.3988% ( 5) 00:07:43.648 14.375 - 14.474: 97.4264% ( 4) 00:07:43.648 14.474 - 14.572: 97.4814% ( 8) 00:07:43.648 14.572 - 14.671: 97.5296% ( 7) 00:07:43.648 14.671 - 14.769: 97.5984% ( 10) 00:07:43.648 14.769 - 14.868: 97.6947% ( 14) 00:07:43.648 14.868 - 14.966: 97.7498% ( 8) 00:07:43.648 14.966 - 15.065: 97.7842% ( 5) 00:07:43.648 15.065 - 15.163: 97.8186% ( 5) 00:07:43.648 15.163 - 15.262: 97.8737% ( 8) 00:07:43.648 15.262 - 15.360: 97.9425% ( 10) 00:07:43.648 15.360 - 15.458: 97.9838% ( 6) 00:07:43.648 15.458 - 15.557: 98.0250% ( 6) 00:07:43.648 15.557 - 15.655: 98.0801% ( 8) 00:07:43.648 15.655 - 15.754: 98.1145% ( 5) 00:07:43.648 15.754 - 15.852: 98.1627% ( 7) 00:07:43.648 15.852 - 15.951: 98.1764% ( 2) 00:07:43.648 
15.951 - 16.049: 98.2177% ( 6) 00:07:43.648 16.049 - 16.148: 98.2521% ( 5) 00:07:43.648 16.148 - 16.246: 98.2934% ( 6) 00:07:43.648 16.246 - 16.345: 98.3485% ( 8) 00:07:43.648 16.345 - 16.443: 98.3829% ( 5) 00:07:43.648 16.443 - 16.542: 98.3966% ( 2) 00:07:43.648 16.542 - 16.640: 98.4242% ( 4) 00:07:43.648 16.640 - 16.738: 98.5205% ( 14) 00:07:43.648 16.738 - 16.837: 98.6168% ( 14) 00:07:43.648 16.837 - 16.935: 98.7338% ( 17) 00:07:43.648 16.935 - 17.034: 98.8026% ( 10) 00:07:43.648 17.034 - 17.132: 98.9472% ( 21) 00:07:43.648 17.132 - 17.231: 99.0573% ( 16) 00:07:43.648 17.231 - 17.329: 99.1398% ( 12) 00:07:43.648 17.329 - 17.428: 99.1949% ( 8) 00:07:43.648 17.428 - 17.526: 99.2637% ( 10) 00:07:43.648 17.526 - 17.625: 99.3187% ( 8) 00:07:43.648 17.625 - 17.723: 99.3600% ( 6) 00:07:43.648 17.723 - 17.822: 99.3807% ( 3) 00:07:43.648 17.822 - 17.920: 99.4013% ( 3) 00:07:43.648 17.920 - 18.018: 99.4288% ( 4) 00:07:43.648 18.018 - 18.117: 99.4426% ( 2) 00:07:43.648 18.117 - 18.215: 99.4839% ( 6) 00:07:43.648 18.215 - 18.314: 99.5114% ( 4) 00:07:43.648 18.314 - 18.412: 99.5596% ( 7) 00:07:43.648 18.412 - 18.511: 99.5665% ( 1) 00:07:43.648 18.511 - 18.609: 99.5734% ( 1) 00:07:43.648 18.609 - 18.708: 99.5802% ( 1) 00:07:43.648 18.708 - 18.806: 99.6009% ( 3) 00:07:43.648 18.905 - 19.003: 99.6078% ( 1) 00:07:43.648 19.003 - 19.102: 99.6146% ( 1) 00:07:43.648 19.102 - 19.200: 99.6353% ( 3) 00:07:43.648 19.200 - 19.298: 99.6491% ( 2) 00:07:43.649 19.298 - 19.397: 99.6559% ( 1) 00:07:43.649 19.397 - 19.495: 99.6697% ( 2) 00:07:43.649 19.495 - 19.594: 99.6835% ( 2) 00:07:43.649 19.692 - 19.791: 99.6903% ( 1) 00:07:43.649 19.791 - 19.889: 99.7041% ( 2) 00:07:43.649 19.988 - 20.086: 99.7110% ( 1) 00:07:43.649 20.086 - 20.185: 99.7179% ( 1) 00:07:43.649 20.185 - 20.283: 99.7316% ( 2) 00:07:43.649 20.283 - 20.382: 99.7385% ( 1) 00:07:43.649 20.480 - 20.578: 99.7454% ( 1) 00:07:43.649 20.874 - 20.972: 99.7523% ( 1) 00:07:43.649 21.169 - 21.268: 99.7592% ( 1) 00:07:43.649 21.366 - 21.465: 99.7660% ( 1) 00:07:43.649 21.563 - 21.662: 99.7729% ( 1) 00:07:43.649 21.662 - 21.760: 99.7798% ( 1) 00:07:43.649 21.760 - 21.858: 99.7867% ( 1) 00:07:43.649 21.858 - 21.957: 99.7936% ( 1) 00:07:43.649 22.055 - 22.154: 99.8004% ( 1) 00:07:43.649 22.351 - 22.449: 99.8073% ( 1) 00:07:43.649 22.449 - 22.548: 99.8142% ( 1) 00:07:43.649 22.646 - 22.745: 99.8211% ( 1) 00:07:43.649 22.942 - 23.040: 99.8280% ( 1) 00:07:43.649 23.138 - 23.237: 99.8348% ( 1) 00:07:43.649 23.335 - 23.434: 99.8417% ( 1) 00:07:43.649 23.828 - 23.926: 99.8486% ( 1) 00:07:43.649 24.025 - 24.123: 99.8624% ( 2) 00:07:43.649 24.123 - 24.222: 99.8693% ( 1) 00:07:43.649 24.517 - 24.615: 99.8761% ( 1) 00:07:43.649 24.615 - 24.714: 99.8830% ( 1) 00:07:43.649 24.911 - 25.009: 99.8899% ( 1) 00:07:43.649 26.585 - 26.782: 99.8968% ( 1) 00:07:43.649 28.357 - 28.554: 99.9037% ( 1) 00:07:43.649 33.871 - 34.068: 99.9105% ( 1) 00:07:43.649 34.855 - 35.052: 99.9174% ( 1) 00:07:43.649 35.840 - 36.037: 99.9243% ( 1) 00:07:43.649 37.415 - 37.612: 99.9312% ( 1) 00:07:43.649 40.763 - 40.960: 99.9381% ( 1) 00:07:43.649 42.338 - 42.535: 99.9449% ( 1) 00:07:43.649 43.520 - 43.717: 99.9518% ( 1) 00:07:43.649 43.914 - 44.111: 99.9587% ( 1) 00:07:43.649 47.655 - 47.852: 99.9656% ( 1) 00:07:43.649 48.837 - 49.034: 99.9725% ( 1) 00:07:43.649 53.563 - 53.957: 99.9794% ( 1) 00:07:43.649 54.351 - 54.745: 99.9862% ( 1) 00:07:43.649 68.529 - 68.923: 99.9931% ( 1) 00:07:43.649 285.145 - 286.720: 100.0000% ( 1) 00:07:43.649 00:07:43.649 Complete histogram 00:07:43.649 ================== 
00:07:43.649 Range in us Cumulative Count 00:07:43.649 7.188 - 7.237: 0.0138% ( 2) 00:07:43.649 7.237 - 7.286: 0.1720% ( 23) 00:07:43.649 7.286 - 7.335: 1.4244% ( 182) 00:07:43.649 7.335 - 7.385: 5.8492% ( 643) 00:07:43.649 7.385 - 7.434: 16.3501% ( 1526) 00:07:43.649 7.434 - 7.483: 33.3402% ( 2469) 00:07:43.649 7.483 - 7.532: 52.4773% ( 2781) 00:07:43.649 7.532 - 7.582: 68.2356% ( 2290) 00:07:43.649 7.582 - 7.631: 78.0897% ( 1432) 00:07:43.649 7.631 - 7.680: 83.2301% ( 747) 00:07:43.649 7.680 - 7.729: 85.5285% ( 334) 00:07:43.649 7.729 - 7.778: 86.8566% ( 193) 00:07:43.649 7.778 - 7.828: 87.4277% ( 83) 00:07:43.649 7.828 - 7.877: 87.6617% ( 34) 00:07:43.649 7.877 - 7.926: 87.9507% ( 42) 00:07:43.649 7.926 - 7.975: 88.4944% ( 79) 00:07:43.649 7.975 - 8.025: 89.6435% ( 167) 00:07:43.649 8.025 - 8.074: 91.0680% ( 207) 00:07:43.649 8.074 - 8.123: 92.6025% ( 223) 00:07:43.649 8.123 - 8.172: 93.9375% ( 194) 00:07:43.649 8.172 - 8.222: 95.0592% ( 163) 00:07:43.649 8.222 - 8.271: 95.7955% ( 107) 00:07:43.649 8.271 - 8.320: 96.3735% ( 84) 00:07:43.649 8.320 - 8.369: 96.7726% ( 58) 00:07:43.649 8.369 - 8.418: 97.0066% ( 34) 00:07:43.649 8.418 - 8.468: 97.1718% ( 24) 00:07:43.649 8.468 - 8.517: 97.3094% ( 20) 00:07:43.649 8.517 - 8.566: 97.3782% ( 10) 00:07:43.649 8.566 - 8.615: 97.3988% ( 3) 00:07:43.649 8.615 - 8.665: 97.4195% ( 3) 00:07:43.649 8.665 - 8.714: 97.4401% ( 3) 00:07:43.649 8.714 - 8.763: 97.4539% ( 2) 00:07:43.649 8.763 - 8.812: 97.4608% ( 1) 00:07:43.649 9.009 - 9.058: 97.4745% ( 2) 00:07:43.649 9.157 - 9.206: 97.4883% ( 2) 00:07:43.649 9.206 - 9.255: 97.4952% ( 1) 00:07:43.649 9.305 - 9.354: 97.5021% ( 1) 00:07:43.649 9.452 - 9.502: 97.5089% ( 1) 00:07:43.649 9.502 - 9.551: 97.5158% ( 1) 00:07:43.649 9.551 - 9.600: 97.5296% ( 2) 00:07:43.649 9.698 - 9.748: 97.5502% ( 3) 00:07:43.649 9.748 - 9.797: 97.5640% ( 2) 00:07:43.649 9.945 - 9.994: 97.5709% ( 1) 00:07:43.649 9.994 - 10.043: 97.5778% ( 1) 00:07:43.649 10.043 - 10.092: 97.5846% ( 1) 00:07:43.649 10.092 - 10.142: 97.5984% ( 2) 00:07:43.649 10.142 - 10.191: 97.6328% ( 5) 00:07:43.649 10.191 - 10.240: 97.6466% ( 2) 00:07:43.649 10.240 - 10.289: 97.6603% ( 2) 00:07:43.649 10.289 - 10.338: 97.6810% ( 3) 00:07:43.649 10.338 - 10.388: 97.7016% ( 3) 00:07:43.649 10.388 - 10.437: 97.7085% ( 1) 00:07:43.649 10.437 - 10.486: 97.7291% ( 3) 00:07:43.649 10.486 - 10.535: 97.7704% ( 6) 00:07:43.649 10.535 - 10.585: 97.7842% ( 2) 00:07:43.649 10.585 - 10.634: 97.8255% ( 6) 00:07:43.649 10.683 - 10.732: 97.8393% ( 2) 00:07:43.649 10.732 - 10.782: 97.8737% ( 5) 00:07:43.649 10.782 - 10.831: 97.9012% ( 4) 00:07:43.649 10.831 - 10.880: 97.9494% ( 7) 00:07:43.649 10.880 - 10.929: 97.9906% ( 6) 00:07:43.649 10.929 - 10.978: 98.0182% ( 4) 00:07:43.649 10.978 - 11.028: 98.0801% ( 9) 00:07:43.649 11.028 - 11.077: 98.1145% ( 5) 00:07:43.649 11.077 - 11.126: 98.1214% ( 1) 00:07:43.649 11.126 - 11.175: 98.1352% ( 2) 00:07:43.649 11.175 - 11.225: 98.1489% ( 2) 00:07:43.649 11.225 - 11.274: 98.1627% ( 2) 00:07:43.649 11.274 - 11.323: 98.1696% ( 1) 00:07:43.649 11.372 - 11.422: 98.1902% ( 3) 00:07:43.649 11.422 - 11.471: 98.2040% ( 2) 00:07:43.649 11.471 - 11.520: 98.2108% ( 1) 00:07:43.649 11.520 - 11.569: 98.2246% ( 2) 00:07:43.649 11.569 - 11.618: 98.2453% ( 3) 00:07:43.649 11.618 - 11.668: 98.2521% ( 1) 00:07:43.649 11.668 - 11.717: 98.2590% ( 1) 00:07:43.649 11.815 - 11.865: 98.2728% ( 2) 00:07:43.649 11.865 - 11.914: 98.2797% ( 1) 00:07:43.649 11.963 - 12.012: 98.2934% ( 2) 00:07:43.649 12.062 - 12.111: 98.3072% ( 2) 00:07:43.649 12.111 - 12.160: 
98.3209% ( 2) 00:07:43.649 12.702 - 12.800: 98.3347% ( 2) 00:07:43.649 12.800 - 12.898: 98.3554% ( 3) 00:07:43.649 12.898 - 12.997: 98.3691% ( 2) 00:07:43.649 12.997 - 13.095: 98.4242% ( 8) 00:07:43.649 13.095 - 13.194: 98.4861% ( 9) 00:07:43.649 13.194 - 13.292: 98.6031% ( 17) 00:07:43.649 13.292 - 13.391: 98.6994% ( 14) 00:07:43.649 13.391 - 13.489: 98.8233% ( 18) 00:07:43.649 13.489 - 13.588: 98.9540% ( 19) 00:07:43.649 13.588 - 13.686: 99.0779% ( 18) 00:07:43.649 13.686 - 13.785: 99.1605% ( 12) 00:07:43.649 13.785 - 13.883: 99.2430% ( 12) 00:07:43.649 13.883 - 13.982: 99.3050% ( 9) 00:07:43.649 13.982 - 14.080: 99.3669% ( 9) 00:07:43.649 14.080 - 14.178: 99.4288% ( 9) 00:07:43.649 14.178 - 14.277: 99.5045% ( 11) 00:07:43.649 14.277 - 14.375: 99.5252% ( 3) 00:07:43.649 14.375 - 14.474: 99.5527% ( 4) 00:07:43.649 14.474 - 14.572: 99.5734% ( 3) 00:07:43.649 14.572 - 14.671: 99.5802% ( 1) 00:07:43.649 14.671 - 14.769: 99.5940% ( 2) 00:07:43.649 14.769 - 14.868: 99.6078% ( 2) 00:07:43.649 14.868 - 14.966: 99.6491% ( 6) 00:07:43.649 15.065 - 15.163: 99.6559% ( 1) 00:07:43.649 15.262 - 15.360: 99.6697% ( 2) 00:07:43.649 15.458 - 15.557: 99.6766% ( 1) 00:07:43.649 15.754 - 15.852: 99.6835% ( 1) 00:07:43.649 15.852 - 15.951: 99.6903% ( 1) 00:07:43.649 16.049 - 16.148: 99.6972% ( 1) 00:07:43.649 16.246 - 16.345: 99.7110% ( 2) 00:07:43.649 16.345 - 16.443: 99.7247% ( 2) 00:07:43.649 16.640 - 16.738: 99.7316% ( 1) 00:07:43.649 16.837 - 16.935: 99.7454% ( 2) 00:07:43.649 16.935 - 17.034: 99.7523% ( 1) 00:07:43.649 17.132 - 17.231: 99.7592% ( 1) 00:07:43.649 17.428 - 17.526: 99.7798% ( 3) 00:07:43.649 17.526 - 17.625: 99.7936% ( 2) 00:07:43.649 17.625 - 17.723: 99.8004% ( 1) 00:07:43.649 17.920 - 18.018: 99.8073% ( 1) 00:07:43.649 18.412 - 18.511: 99.8142% ( 1) 00:07:43.649 18.511 - 18.609: 99.8280% ( 2) 00:07:43.649 18.806 - 18.905: 99.8417% ( 2) 00:07:43.649 18.905 - 19.003: 99.8555% ( 2) 00:07:43.649 19.102 - 19.200: 99.8624% ( 1) 00:07:43.649 19.397 - 19.495: 99.8693% ( 1) 00:07:43.649 19.791 - 19.889: 99.8830% ( 2) 00:07:43.649 19.988 - 20.086: 99.8899% ( 1) 00:07:43.649 20.972 - 21.071: 99.9037% ( 2) 00:07:43.649 21.858 - 21.957: 99.9105% ( 1) 00:07:43.649 22.252 - 22.351: 99.9174% ( 1) 00:07:43.649 22.548 - 22.646: 99.9243% ( 1) 00:07:43.649 22.745 - 22.843: 99.9312% ( 1) 00:07:43.649 23.335 - 23.434: 99.9381% ( 1) 00:07:43.649 24.123 - 24.222: 99.9449% ( 1) 00:07:43.649 25.403 - 25.600: 99.9518% ( 1) 00:07:43.649 25.797 - 25.994: 99.9587% ( 1) 00:07:43.649 33.477 - 33.674: 99.9656% ( 1) 00:07:43.649 36.825 - 37.022: 99.9725% ( 1) 00:07:43.649 47.065 - 47.262: 99.9794% ( 1) 00:07:43.649 47.458 - 47.655: 99.9862% ( 1) 00:07:43.649 57.895 - 58.289: 99.9931% ( 1) 00:07:43.649 58.683 - 59.077: 100.0000% ( 1) 00:07:43.649 00:07:43.649 00:07:43.649 real 0m1.201s 00:07:43.649 user 0m1.053s 00:07:43.649 sys 0m0.101s 00:07:43.649 03:58:36 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.649 03:58:36 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:43.649 ************************************ 00:07:43.649 END TEST nvme_overhead 00:07:43.649 ************************************ 00:07:43.650 03:58:36 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:43.650 03:58:36 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:43.650 03:58:36 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.650 03:58:36 nvme -- common/autotest_common.sh@10 -- # set +x 
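Editor's note on the arbitration run invoked below: each controller/core pairing is reported as a rate in IO/s together with the projected time to finish 100000 I/Os, and the second figure is simply the I/O count divided by the rate. For example, 100000 / 938.67 IO/s ≈ 106.53 seconds and 100000 / 896.00 IO/s ≈ 111.61 seconds, which match the per-core lines in the summary that follows.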
00:07:43.650 ************************************ 00:07:43.650 START TEST nvme_arbitration 00:07:43.650 ************************************ 00:07:43.650 03:58:36 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:46.939 Initializing NVMe Controllers 00:07:46.939 Attached to 0000:00:13.0 00:07:46.939 Attached to 0000:00:10.0 00:07:46.939 Attached to 0000:00:11.0 00:07:46.939 Attached to 0000:00:12.0 00:07:46.939 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:07:46.939 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:07:46.939 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:07:46.939 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:46.939 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:46.939 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:46.939 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:46.939 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:46.939 Initialization complete. Launching workers. 00:07:46.939 Starting thread on core 1 with urgent priority queue 00:07:46.939 Starting thread on core 2 with urgent priority queue 00:07:46.939 Starting thread on core 3 with urgent priority queue 00:07:46.939 Starting thread on core 0 with urgent priority queue 00:07:46.939 QEMU NVMe Ctrl (12343 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:07:46.939 QEMU NVMe Ctrl (12342 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:07:46.939 QEMU NVMe Ctrl (12340 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:07:46.939 QEMU NVMe Ctrl (12342 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:07:46.939 QEMU NVMe Ctrl (12341 ) core 2: 896.00 IO/s 111.61 secs/100000 ios 00:07:46.939 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:07:46.939 ======================================================== 00:07:46.939 00:07:46.939 00:07:46.939 real 0m3.283s 00:07:46.939 user 0m9.220s 00:07:46.940 sys 0m0.115s 00:07:46.940 ************************************ 00:07:46.940 END TEST nvme_arbitration 00:07:46.940 03:58:39 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.940 03:58:39 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:46.940 ************************************ 00:07:46.940 03:58:39 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:46.940 03:58:39 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:46.940 03:58:39 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.940 03:58:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.940 ************************************ 00:07:46.940 START TEST nvme_single_aen 00:07:46.940 ************************************ 00:07:46.940 03:58:39 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:46.940 Asynchronous Event Request test 00:07:46.940 Attached to 0000:00:13.0 00:07:46.940 Attached to 0000:00:10.0 00:07:46.940 Attached to 0000:00:11.0 00:07:46.940 Attached to 0000:00:12.0 00:07:46.940 Reset controller to setup AER completions for this process 00:07:46.940 Registering asynchronous event callbacks... 
00:07:46.940 Getting orig temperature thresholds of all controllers 00:07:46.940 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.940 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.940 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.940 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:46.940 Setting all controllers temperature threshold low to trigger AER 00:07:46.940 Waiting for all controllers temperature threshold to be set lower 00:07:46.940 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.940 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:46.940 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.940 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:46.940 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.940 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:46.940 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:46.940 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:46.940 Waiting for all controllers to trigger AER and reset threshold 00:07:46.940 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.940 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.940 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.940 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:46.940 Cleaning up... 00:07:47.198 00:07:47.198 real 0m0.238s 00:07:47.198 user 0m0.083s 00:07:47.198 sys 0m0.108s 00:07:47.198 03:58:40 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.198 ************************************ 00:07:47.198 END TEST nvme_single_aen 00:07:47.198 ************************************ 00:07:47.198 03:58:40 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:47.198 03:58:40 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:47.198 03:58:40 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.198 03:58:40 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.198 03:58:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.198 ************************************ 00:07:47.198 START TEST nvme_doorbell_aers 00:07:47.198 ************************************ 00:07:47.198 03:58:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:07:47.198 03:58:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:47.198 03:58:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:47.198 03:58:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:47.198 03:58:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:47.198 03:58:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:47.198 03:58:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:07:47.198 03:58:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:47.198 03:58:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:47.199 03:58:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
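Editor's note on the doorbell test setup above: the device list is built by running scripts/gen_nvme.sh and extracting every controller's PCI address with the jq filter '.config[].params.traddr'. That filter implies a JSON shape along the following lines (a trimmed, assumed example; the real gen_nvme.sh output carries additional fields per entry):

{
  "config": [
    { "params": { "traddr": "0000:00:10.0" } },
    { "params": { "traddr": "0000:00:11.0" } },
    { "params": { "traddr": "0000:00:12.0" } },
    { "params": { "traddr": "0000:00:13.0" } }
  ]
}

Piping that through jq -r '.config[].params.traddr' yields the four bare addresses printed just below, which the loop then feeds one at a time to the doorbell_aers binary under a 10-second timeout.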
00:07:47.199 03:58:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:47.199 03:58:40 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:47.199 03:58:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:47.199 03:58:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:47.459 [2024-10-13 03:58:40.455360] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:07:57.454 Executing: test_write_invalid_db 00:07:57.454 Waiting for AER completion... 00:07:57.454 Failure: test_write_invalid_db 00:07:57.454 00:07:57.454 Executing: test_invalid_db_write_overflow_sq 00:07:57.454 Waiting for AER completion... 00:07:57.454 Failure: test_invalid_db_write_overflow_sq 00:07:57.454 00:07:57.454 Executing: test_invalid_db_write_overflow_cq 00:07:57.454 Waiting for AER completion... 00:07:57.454 Failure: test_invalid_db_write_overflow_cq 00:07:57.454 00:07:57.454 03:58:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:57.454 03:58:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:57.454 [2024-10-13 03:58:50.468701] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:07.430 Executing: test_write_invalid_db 00:08:07.430 Waiting for AER completion... 00:08:07.430 Failure: test_write_invalid_db 00:08:07.430 00:08:07.430 Executing: test_invalid_db_write_overflow_sq 00:08:07.430 Waiting for AER completion... 00:08:07.430 Failure: test_invalid_db_write_overflow_sq 00:08:07.430 00:08:07.430 Executing: test_invalid_db_write_overflow_cq 00:08:07.430 Waiting for AER completion... 00:08:07.430 Failure: test_invalid_db_write_overflow_cq 00:08:07.430 00:08:07.430 03:59:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:07.430 03:59:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:07.430 [2024-10-13 03:59:00.476191] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:17.413 Executing: test_write_invalid_db 00:08:17.413 Waiting for AER completion... 00:08:17.413 Failure: test_write_invalid_db 00:08:17.413 00:08:17.413 Executing: test_invalid_db_write_overflow_sq 00:08:17.413 Waiting for AER completion... 00:08:17.413 Failure: test_invalid_db_write_overflow_sq 00:08:17.413 00:08:17.413 Executing: test_invalid_db_write_overflow_cq 00:08:17.413 Waiting for AER completion... 
00:08:17.413 Failure: test_invalid_db_write_overflow_cq 00:08:17.413 00:08:17.413 03:59:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:17.413 03:59:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:17.413 [2024-10-13 03:59:10.526771] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:27.416 Executing: test_write_invalid_db 00:08:27.416 Waiting for AER completion... 00:08:27.416 Failure: test_write_invalid_db 00:08:27.416 00:08:27.416 Executing: test_invalid_db_write_overflow_sq 00:08:27.416 Waiting for AER completion... 00:08:27.416 Failure: test_invalid_db_write_overflow_sq 00:08:27.416 00:08:27.416 Executing: test_invalid_db_write_overflow_cq 00:08:27.416 Waiting for AER completion... 00:08:27.416 Failure: test_invalid_db_write_overflow_cq 00:08:27.416 00:08:27.416 00:08:27.416 real 0m40.197s 00:08:27.416 user 0m34.055s 00:08:27.416 sys 0m5.744s 00:08:27.416 03:59:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.416 03:59:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:27.416 ************************************ 00:08:27.416 END TEST nvme_doorbell_aers 00:08:27.416 ************************************ 00:08:27.416 03:59:20 nvme -- nvme/nvme.sh@97 -- # uname 00:08:27.416 03:59:20 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:27.416 03:59:20 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:27.416 03:59:20 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:27.416 03:59:20 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.416 03:59:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.416 ************************************ 00:08:27.416 START TEST nvme_multi_aen 00:08:27.416 ************************************ 00:08:27.416 03:59:20 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:27.674 [2024-10-13 03:59:20.586513] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:27.674 [2024-10-13 03:59:20.586574] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:27.674 [2024-10-13 03:59:20.586584] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:27.674 [2024-10-13 03:59:20.588033] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:27.674 [2024-10-13 03:59:20.588073] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:27.674 [2024-10-13 03:59:20.588084] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:27.674 [2024-10-13 03:59:20.589100] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. 
Dropping the request. 00:08:27.674 [2024-10-13 03:59:20.589126] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:27.674 [2024-10-13 03:59:20.589134] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:27.674 [2024-10-13 03:59:20.590140] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:27.674 [2024-10-13 03:59:20.590164] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:27.674 [2024-10-13 03:59:20.590171] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63541) is not found. Dropping the request. 00:08:27.674 Child process pid: 64067 00:08:27.674 [Child] Asynchronous Event Request test 00:08:27.674 [Child] Attached to 0000:00:13.0 00:08:27.674 [Child] Attached to 0000:00:10.0 00:08:27.674 [Child] Attached to 0000:00:11.0 00:08:27.674 [Child] Attached to 0000:00:12.0 00:08:27.674 [Child] Registering asynchronous event callbacks... 00:08:27.674 [Child] Getting orig temperature thresholds of all controllers 00:08:27.674 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.674 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.674 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.674 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.674 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:27.675 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.675 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.675 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.675 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.675 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.675 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.675 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.675 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.675 [Child] Cleaning up... 00:08:27.675 Asynchronous Event Request test 00:08:27.675 Attached to 0000:00:13.0 00:08:27.675 Attached to 0000:00:10.0 00:08:27.675 Attached to 0000:00:11.0 00:08:27.675 Attached to 0000:00:12.0 00:08:27.675 Reset controller to setup AER completions for this process 00:08:27.675 Registering asynchronous event callbacks... 
00:08:27.675 Getting orig temperature thresholds of all controllers 00:08:27.675 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.675 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.675 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.675 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:27.675 Setting all controllers temperature threshold low to trigger AER 00:08:27.675 Waiting for all controllers temperature threshold to be set lower 00:08:27.675 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.675 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:27.675 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.675 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:27.675 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.675 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:27.675 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:27.675 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:27.675 Waiting for all controllers to trigger AER and reset threshold 00:08:27.675 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.675 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.675 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.675 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:27.675 Cleaning up... 00:08:27.675 00:08:27.675 real 0m0.411s 00:08:27.675 user 0m0.136s 00:08:27.675 sys 0m0.174s 00:08:27.675 03:59:20 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.675 03:59:20 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:27.675 ************************************ 00:08:27.675 END TEST nvme_multi_aen 00:08:27.675 ************************************ 00:08:27.932 03:59:20 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:27.932 03:59:20 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:27.932 03:59:20 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.932 03:59:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.932 ************************************ 00:08:27.932 START TEST nvme_startup 00:08:27.932 ************************************ 00:08:27.932 03:59:20 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:27.932 Initializing NVMe Controllers 00:08:27.932 Attached to 0000:00:13.0 00:08:27.932 Attached to 0000:00:10.0 00:08:27.932 Attached to 0000:00:11.0 00:08:27.932 Attached to 0000:00:12.0 00:08:27.932 Initialization complete. 00:08:27.932 Time used:138318.406 (us). 
00:08:27.932 00:08:27.932 real 0m0.201s 00:08:27.932 user 0m0.062s 00:08:27.932 sys 0m0.095s 00:08:27.932 03:59:21 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.932 03:59:21 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:27.932 ************************************ 00:08:27.932 END TEST nvme_startup 00:08:27.932 ************************************ 00:08:27.932 03:59:21 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:27.932 03:59:21 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.932 03:59:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.932 03:59:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.932 ************************************ 00:08:27.932 START TEST nvme_multi_secondary 00:08:27.932 ************************************ 00:08:27.932 03:59:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:27.932 03:59:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64118 00:08:27.932 03:59:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64119 00:08:27.932 03:59:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:27.932 03:59:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:27.932 03:59:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:31.220 Initializing NVMe Controllers 00:08:31.220 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.220 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.220 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.220 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.220 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:31.220 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:31.220 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:31.220 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:31.220 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:31.220 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:31.220 Initialization complete. Launching workers. 
00:08:31.220 ======================================================== 00:08:31.220 Latency(us) 00:08:31.220 Device Information : IOPS MiB/s Average min max 00:08:31.220 PCIE (0000:00:13.0) NSID 1 from core 1: 7881.28 30.79 2029.74 748.18 6103.29 00:08:31.220 PCIE (0000:00:10.0) NSID 1 from core 1: 7881.28 30.79 2028.88 730.42 6150.01 00:08:31.220 PCIE (0000:00:11.0) NSID 1 from core 1: 7881.28 30.79 2029.90 753.03 5950.27 00:08:31.220 PCIE (0000:00:12.0) NSID 1 from core 1: 7881.28 30.79 2030.05 739.72 6024.08 00:08:31.220 PCIE (0000:00:12.0) NSID 2 from core 1: 7881.28 30.79 2030.39 743.74 5828.15 00:08:31.220 PCIE (0000:00:12.0) NSID 3 from core 1: 7881.28 30.79 2030.47 739.47 6105.29 00:08:31.220 ======================================================== 00:08:31.220 Total : 47287.66 184.72 2029.90 730.42 6150.01 00:08:31.220 00:08:31.479 Initializing NVMe Controllers 00:08:31.479 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.479 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.479 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.479 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.479 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:31.479 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:31.479 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:31.479 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:31.479 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:31.479 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:31.479 Initialization complete. Launching workers. 00:08:31.479 ======================================================== 00:08:31.479 Latency(us) 00:08:31.479 Device Information : IOPS MiB/s Average min max 00:08:31.479 PCIE (0000:00:13.0) NSID 1 from core 2: 3189.12 12.46 5016.68 1124.19 12673.01 00:08:31.479 PCIE (0000:00:10.0) NSID 1 from core 2: 3189.12 12.46 5015.98 942.70 12718.62 00:08:31.479 PCIE (0000:00:11.0) NSID 1 from core 2: 3189.12 12.46 5016.72 1105.56 13345.92 00:08:31.479 PCIE (0000:00:12.0) NSID 1 from core 2: 3189.12 12.46 5016.63 1129.06 11608.22 00:08:31.479 PCIE (0000:00:12.0) NSID 2 from core 2: 3189.12 12.46 5016.18 1021.49 12506.23 00:08:31.479 PCIE (0000:00:12.0) NSID 3 from core 2: 3189.12 12.46 5016.16 912.76 12555.75 00:08:31.479 ======================================================== 00:08:31.479 Total : 19134.75 74.75 5016.39 912.76 13345.92 00:08:31.479 00:08:31.479 03:59:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64118 00:08:33.380 Initializing NVMe Controllers 00:08:33.380 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:33.380 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:33.380 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:33.380 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:33.380 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:33.380 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:33.380 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:33.380 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:33.380 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:33.380 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:33.380 Initialization complete. Launching workers. 
00:08:33.380 ======================================================== 00:08:33.380 Latency(us) 00:08:33.380 Device Information : IOPS MiB/s Average min max 00:08:33.380 PCIE (0000:00:13.0) NSID 1 from core 0: 11141.44 43.52 1435.71 706.37 9244.17 00:08:33.380 PCIE (0000:00:10.0) NSID 1 from core 0: 11141.44 43.52 1434.85 683.56 8865.05 00:08:33.380 PCIE (0000:00:11.0) NSID 1 from core 0: 11141.44 43.52 1435.66 701.07 8874.33 00:08:33.380 PCIE (0000:00:12.0) NSID 1 from core 0: 11141.44 43.52 1435.65 667.13 9252.31 00:08:33.380 PCIE (0000:00:12.0) NSID 2 from core 0: 11141.44 43.52 1435.63 660.39 9246.23 00:08:33.380 PCIE (0000:00:12.0) NSID 3 from core 0: 11141.44 43.52 1435.60 627.28 9245.40 00:08:33.380 ======================================================== 00:08:33.380 Total : 66848.66 261.13 1435.52 627.28 9252.31 00:08:33.380 00:08:33.380 03:59:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64119 00:08:33.380 03:59:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64193 00:08:33.380 03:59:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64194 00:08:33.380 03:59:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:33.380 03:59:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:33.380 03:59:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:36.663 Initializing NVMe Controllers 00:08:36.663 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:36.663 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:36.663 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:36.663 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:36.663 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:36.663 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:36.663 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:36.663 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:36.663 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:36.663 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:36.663 Initialization complete. Launching workers. 
00:08:36.663 ======================================================== 00:08:36.663 Latency(us) 00:08:36.663 Device Information : IOPS MiB/s Average min max 00:08:36.663 PCIE (0000:00:13.0) NSID 1 from core 1: 7270.81 28.40 2200.17 777.89 5619.39 00:08:36.663 PCIE (0000:00:10.0) NSID 1 from core 1: 7270.81 28.40 2199.25 752.59 5679.51 00:08:36.663 PCIE (0000:00:11.0) NSID 1 from core 1: 7270.81 28.40 2200.25 766.65 5443.79 00:08:36.663 PCIE (0000:00:12.0) NSID 1 from core 1: 7270.81 28.40 2200.31 771.14 5706.15 00:08:36.663 PCIE (0000:00:12.0) NSID 2 from core 1: 7270.81 28.40 2200.28 779.61 5491.27 00:08:36.663 PCIE (0000:00:12.0) NSID 3 from core 1: 7270.81 28.40 2200.33 774.28 5717.24 00:08:36.663 ======================================================== 00:08:36.663 Total : 43624.85 170.41 2200.10 752.59 5717.24 00:08:36.663 00:08:36.922 Initializing NVMe Controllers 00:08:36.922 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:36.922 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:36.922 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:36.922 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:36.922 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:36.922 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:36.922 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:36.922 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:36.922 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:36.922 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:36.922 Initialization complete. Launching workers. 00:08:36.922 ======================================================== 00:08:36.922 Latency(us) 00:08:36.922 Device Information : IOPS MiB/s Average min max 00:08:36.922 PCIE (0000:00:13.0) NSID 1 from core 0: 7391.95 28.87 2164.09 754.42 6476.05 00:08:36.922 PCIE (0000:00:10.0) NSID 1 from core 0: 7391.95 28.87 2163.26 754.38 6516.32 00:08:36.922 PCIE (0000:00:11.0) NSID 1 from core 0: 7391.95 28.87 2164.13 756.52 6534.91 00:08:36.922 PCIE (0000:00:12.0) NSID 1 from core 0: 7391.95 28.87 2164.11 768.79 6331.66 00:08:36.922 PCIE (0000:00:12.0) NSID 2 from core 0: 7391.95 28.87 2164.08 768.14 6524.07 00:08:36.922 PCIE (0000:00:12.0) NSID 3 from core 0: 7391.95 28.87 2164.04 762.47 6459.17 00:08:36.922 ======================================================== 00:08:36.922 Total : 44351.70 173.25 2163.95 754.38 6534.91 00:08:36.922 00:08:38.822 Initializing NVMe Controllers 00:08:38.822 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:38.822 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:38.822 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:38.822 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:38.822 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:38.822 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:38.822 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:38.822 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:38.822 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:38.822 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:38.822 Initialization complete. Launching workers. 
00:08:38.822 ======================================================== 00:08:38.822 Latency(us) 00:08:38.822 Device Information : IOPS MiB/s Average min max 00:08:38.822 PCIE (0000:00:13.0) NSID 1 from core 2: 4430.76 17.31 3610.41 788.71 13746.36 00:08:38.822 PCIE (0000:00:10.0) NSID 1 from core 2: 4430.76 17.31 3608.69 773.09 13179.62 00:08:38.822 PCIE (0000:00:11.0) NSID 1 from core 2: 4430.76 17.31 3610.70 731.93 14929.46 00:08:38.822 PCIE (0000:00:12.0) NSID 1 from core 2: 4430.76 17.31 3610.46 788.64 15890.81 00:08:38.822 PCIE (0000:00:12.0) NSID 2 from core 2: 4430.76 17.31 3610.40 793.46 12800.79 00:08:38.822 PCIE (0000:00:12.0) NSID 3 from core 2: 4430.76 17.31 3610.34 790.53 14834.63 00:08:38.822 ======================================================== 00:08:38.822 Total : 26584.54 103.85 3610.17 731.93 15890.81 00:08:38.822 00:08:38.822 03:59:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64193 00:08:38.822 03:59:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64194 00:08:38.822 00:08:38.822 real 0m10.694s 00:08:38.822 user 0m18.336s 00:08:38.822 sys 0m0.676s 00:08:38.822 03:59:31 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.822 03:59:31 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:38.822 ************************************ 00:08:38.822 END TEST nvme_multi_secondary 00:08:38.822 ************************************ 00:08:38.822 03:59:31 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:38.822 03:59:31 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:38.822 03:59:31 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/63156 ]] 00:08:38.822 03:59:31 nvme -- common/autotest_common.sh@1090 -- # kill 63156 00:08:38.822 03:59:31 nvme -- common/autotest_common.sh@1091 -- # wait 63156 00:08:38.822 [2024-10-13 03:59:31.813188] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.813255] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.813282] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.813300] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.815862] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.815916] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.815932] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.815948] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.818233] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 
00:08:38.822 [2024-10-13 03:59:31.818279] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.818293] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.818308] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.820542] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.820595] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.820609] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 [2024-10-13 03:59:31.820638] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64066) is not found. Dropping the request. 00:08:38.822 03:59:31 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:38.822 03:59:31 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:38.822 03:59:31 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:38.822 03:59:31 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.822 03:59:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.822 03:59:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.822 ************************************ 00:08:38.822 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:38.822 ************************************ 00:08:38.822 03:59:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:39.081 * Looking for test storage... 
00:08:39.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:39.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.081 --rc genhtml_branch_coverage=1 00:08:39.081 --rc genhtml_function_coverage=1 00:08:39.081 --rc genhtml_legend=1 00:08:39.081 --rc geninfo_all_blocks=1 00:08:39.081 --rc geninfo_unexecuted_blocks=1 00:08:39.081 00:08:39.081 ' 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:39.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.081 --rc genhtml_branch_coverage=1 00:08:39.081 --rc genhtml_function_coverage=1 00:08:39.081 --rc genhtml_legend=1 00:08:39.081 --rc geninfo_all_blocks=1 00:08:39.081 --rc geninfo_unexecuted_blocks=1 00:08:39.081 00:08:39.081 ' 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:39.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.081 --rc genhtml_branch_coverage=1 00:08:39.081 --rc genhtml_function_coverage=1 00:08:39.081 --rc genhtml_legend=1 00:08:39.081 --rc geninfo_all_blocks=1 00:08:39.081 --rc geninfo_unexecuted_blocks=1 00:08:39.081 00:08:39.081 ' 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:39.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.081 --rc genhtml_branch_coverage=1 00:08:39.081 --rc genhtml_function_coverage=1 00:08:39.081 --rc genhtml_legend=1 00:08:39.081 --rc geninfo_all_blocks=1 00:08:39.081 --rc geninfo_unexecuted_blocks=1 00:08:39.081 00:08:39.081 ' 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:39.081 
03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64355 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64355 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 64355 ']' 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:39.081 03:59:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:39.081 [2024-10-13 03:59:32.224597] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:08:39.081 [2024-10-13 03:59:32.224735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64355 ] 00:08:39.339 [2024-10-13 03:59:32.384354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.339 [2024-10-13 03:59:32.482315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.339 [2024-10-13 03:59:32.482827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.340 [2024-10-13 03:59:32.483254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.340 [2024-10-13 03:59:32.483324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:40.273 nvme0n1 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_7B2b1.txt 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:40.273 true 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728791973 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64378 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:40.273 03:59:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:40.273 03:59:33 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:42.173 [2024-10-13 03:59:35.160024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:42.173 [2024-10-13 03:59:35.160338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:42.173 [2024-10-13 03:59:35.160361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:42.173 [2024-10-13 03:59:35.160372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:42.173 [2024-10-13 03:59:35.161792] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.173 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64378 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64378 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64378 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_7B2b1.txt 00:08:42.173 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:42.174 03:59:35 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_7B2b1.txt 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64355 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 64355 ']' 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 64355 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64355 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.174 killing process with pid 64355 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64355' 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 64355 00:08:42.174 03:59:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 64355 00:08:43.547 03:59:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:43.547 03:59:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:43.547 00:08:43.547 real 
0m4.485s 00:08:43.547 user 0m15.925s 00:08:43.547 sys 0m0.488s 00:08:43.547 03:59:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.547 03:59:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:43.547 ************************************ 00:08:43.547 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:43.547 ************************************ 00:08:43.547 03:59:36 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:43.547 03:59:36 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:43.547 03:59:36 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.547 03:59:36 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.547 03:59:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.547 ************************************ 00:08:43.547 START TEST nvme_fio 00:08:43.547 ************************************ 00:08:43.547 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:08:43.547 03:59:36 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:43.547 03:59:36 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:43.547 03:59:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:43.547 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:43.547 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:43.547 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:43.547 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:43.547 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:43.547 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:43.547 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:43.547 03:59:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:43.547 03:59:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:43.547 03:59:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:43.547 03:59:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:43.547 03:59:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:43.806 03:59:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:43.806 03:59:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:43.806 03:59:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:43.806 03:59:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:43.806 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:43.806 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:43.806 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:08:43.806 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:43.806 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.806 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:43.806 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:43.806 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:43.806 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.806 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:43.806 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:44.066 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:44.066 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:44.066 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:44.066 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:44.066 03:59:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:44.066 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:44.066 fio-3.35 00:08:44.066 Starting 1 thread 00:08:48.249 00:08:48.249 test: (groupid=0, jobs=1): err= 0: pid=64513: Sun Oct 13 03:59:40 2024 00:08:48.249 read: IOPS=18.9k, BW=73.7MiB/s (77.3MB/s)(147MiB/2001msec) 00:08:48.249 slat (nsec): min=3375, max=79907, avg=5322.08, stdev=2625.19 00:08:48.249 clat (usec): min=242, max=9187, avg=3363.44, stdev=1243.35 00:08:48.249 lat (usec): min=246, max=9192, avg=3368.76, stdev=1244.51 00:08:48.249 clat percentiles (usec): 00:08:48.249 | 1.00th=[ 1991], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2474], 00:08:48.249 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2868], 60.00th=[ 3064], 00:08:48.249 | 70.00th=[ 3425], 80.00th=[ 4293], 90.00th=[ 5342], 95.00th=[ 6128], 00:08:48.249 | 99.00th=[ 7242], 99.50th=[ 7701], 99.90th=[ 8586], 99.95th=[ 8717], 00:08:48.249 | 99.99th=[ 9110] 00:08:48.249 bw ( KiB/s): min=71416, max=84600, per=100.00%, avg=76325.33, stdev=7207.54, samples=3 00:08:48.249 iops : min=17854, max=21150, avg=19081.33, stdev=1801.88, samples=3 00:08:48.249 write: IOPS=18.9k, BW=73.8MiB/s (77.3MB/s)(148MiB/2001msec); 0 zone resets 00:08:48.249 slat (usec): min=3, max=169, avg= 5.55, stdev= 2.97 00:08:48.249 clat (usec): min=191, max=9643, avg=3390.83, stdev=1256.05 00:08:48.249 lat (usec): min=196, max=9647, avg=3396.38, stdev=1257.18 00:08:48.249 clat percentiles (usec): 00:08:48.249 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2474], 00:08:48.249 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 3064], 00:08:48.249 | 70.00th=[ 3490], 80.00th=[ 4359], 90.00th=[ 5407], 95.00th=[ 6194], 00:08:48.249 | 99.00th=[ 7373], 99.50th=[ 7832], 99.90th=[ 8586], 99.95th=[ 8848], 00:08:48.249 | 99.99th=[ 8979] 00:08:48.249 bw ( KiB/s): min=71688, max=84792, per=100.00%, avg=76501.33, stdev=7210.94, samples=3 00:08:48.249 iops : min=17922, max=21198, avg=19125.33, stdev=1802.73, samples=3 00:08:48.249 lat (usec) : 
250=0.01%, 500=0.02%, 750=0.02%, 1000=0.01% 00:08:48.249 lat (msec) : 2=0.92%, 4=75.69%, 10=23.33% 00:08:48.249 cpu : usr=98.40%, sys=0.25%, ctx=28, majf=0, minf=608 00:08:48.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:48.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:48.249 issued rwts: total=37752,37781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:48.249 00:08:48.249 Run status group 0 (all jobs): 00:08:48.249 READ: bw=73.7MiB/s (77.3MB/s), 73.7MiB/s-73.7MiB/s (77.3MB/s-77.3MB/s), io=147MiB (155MB), run=2001-2001msec 00:08:48.249 WRITE: bw=73.8MiB/s (77.3MB/s), 73.8MiB/s-73.8MiB/s (77.3MB/s-77.3MB/s), io=148MiB (155MB), run=2001-2001msec 00:08:48.249 ----------------------------------------------------- 00:08:48.249 Suppressions used: 00:08:48.249 count bytes template 00:08:48.249 1 32 /usr/src/fio/parse.c 00:08:48.249 1 8 libtcmalloc_minimal.so 00:08:48.249 ----------------------------------------------------- 00:08:48.249 00:08:48.249 03:59:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:48.249 03:59:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:48.249 03:59:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:48.249 03:59:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:48.249 03:59:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:48.249 03:59:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:48.511 03:59:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:48.511 03:59:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:48.511 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:48.511 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:48.512 03:59:41 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:48.785 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:48.785 fio-3.35 00:08:48.785 Starting 1 thread 00:08:54.052 00:08:54.052 test: (groupid=0, jobs=1): err= 0: pid=64568: Sun Oct 13 03:59:47 2024 00:08:54.052 read: IOPS=19.0k, BW=74.3MiB/s (77.9MB/s)(149MiB/2001msec) 00:08:54.052 slat (nsec): min=3346, max=74887, avg=5485.95, stdev=2887.79 00:08:54.052 clat (usec): min=246, max=12500, avg=3340.78, stdev=1238.44 00:08:54.052 lat (usec): min=251, max=12556, avg=3346.27, stdev=1239.79 00:08:54.052 clat percentiles (usec): 00:08:54.052 | 1.00th=[ 1860], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2442], 00:08:54.052 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2868], 60.00th=[ 3097], 00:08:54.052 | 70.00th=[ 3425], 80.00th=[ 4293], 90.00th=[ 5342], 95.00th=[ 5997], 00:08:54.052 | 99.00th=[ 7046], 99.50th=[ 7635], 99.90th=[ 8848], 99.95th=[ 9372], 00:08:54.052 | 99.99th=[12387] 00:08:54.052 bw ( KiB/s): min=69784, max=83888, per=100.00%, avg=76179.33, stdev=7143.13, samples=3 00:08:54.052 iops : min=17446, max=20972, avg=19044.67, stdev=1785.83, samples=3 00:08:54.052 write: IOPS=19.0k, BW=74.3MiB/s (77.9MB/s)(149MiB/2001msec); 0 zone resets 00:08:54.052 slat (nsec): min=3456, max=80476, avg=5700.79, stdev=2922.35 00:08:54.052 clat (usec): min=261, max=12429, avg=3357.13, stdev=1236.22 00:08:54.052 lat (usec): min=266, max=12443, avg=3362.83, stdev=1237.57 00:08:54.052 clat percentiles (usec): 00:08:54.052 | 1.00th=[ 1876], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2474], 00:08:54.052 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2868], 60.00th=[ 3097], 00:08:54.052 | 70.00th=[ 3458], 80.00th=[ 4359], 90.00th=[ 5342], 95.00th=[ 5997], 00:08:54.052 | 99.00th=[ 7046], 99.50th=[ 7635], 99.90th=[ 8979], 99.95th=[10290], 00:08:54.052 | 99.99th=[11994] 00:08:54.052 bw ( KiB/s): min=69920, max=84368, per=100.00%, avg=76339.33, stdev=7357.22, samples=3 00:08:54.052 iops : min=17480, max=21092, avg=19084.67, stdev=1839.36, samples=3 00:08:54.052 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.04% 00:08:54.052 lat (msec) : 2=1.47%, 4=75.33%, 10=23.07%, 20=0.05% 00:08:54.052 cpu : usr=98.90%, sys=0.15%, ctx=4, majf=0, minf=607 00:08:54.052 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:54.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:54.052 issued rwts: total=38081,38066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:54.052 00:08:54.052 Run status group 0 (all jobs): 00:08:54.052 READ: bw=74.3MiB/s (77.9MB/s), 74.3MiB/s-74.3MiB/s (77.9MB/s-77.9MB/s), io=149MiB (156MB), run=2001-2001msec 00:08:54.052 WRITE: bw=74.3MiB/s (77.9MB/s), 74.3MiB/s-74.3MiB/s (77.9MB/s-77.9MB/s), io=149MiB (156MB), run=2001-2001msec 00:08:54.311 ----------------------------------------------------- 00:08:54.311 Suppressions used: 00:08:54.311 count bytes template 
00:08:54.311 1 32 /usr/src/fio/parse.c 00:08:54.311 1 8 libtcmalloc_minimal.so 00:08:54.311 ----------------------------------------------------- 00:08:54.311 00:08:54.311 03:59:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:54.311 03:59:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:54.311 03:59:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:54.311 03:59:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:54.569 03:59:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:54.569 03:59:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:54.569 03:59:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:54.569 03:59:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:54.569 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:54.569 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:54.569 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:54.569 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:54.569 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.569 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:54.569 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:54.569 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:54.569 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.569 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:54.827 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:54.827 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:54.827 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:54.827 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:54.827 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:54.827 03:59:47 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:54.827 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:54.827 fio-3.35 00:08:54.827 Starting 1 thread 00:09:00.102 00:09:00.102 test: (groupid=0, jobs=1): err= 0: pid=64629: Sun Oct 13 03:59:52 2024 00:09:00.103 read: IOPS=16.0k, BW=62.3MiB/s (65.3MB/s)(125MiB/2001msec) 00:09:00.103 slat (nsec): min=3387, max=74684, avg=6048.04, stdev=3453.57 00:09:00.103 clat (usec): min=249, max=8981, avg=3984.49, stdev=1362.78 00:09:00.103 lat (usec): min=254, max=8986, 
avg=3990.53, stdev=1364.12 00:09:00.103 clat percentiles (usec): 00:09:00.103 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2737], 00:09:00.103 | 30.00th=[ 2933], 40.00th=[ 3228], 50.00th=[ 3621], 60.00th=[ 4228], 00:09:00.103 | 70.00th=[ 4752], 80.00th=[ 5211], 90.00th=[ 5932], 95.00th=[ 6456], 00:09:00.103 | 99.00th=[ 7570], 99.50th=[ 7832], 99.90th=[ 8455], 99.95th=[ 8717], 00:09:00.103 | 99.99th=[ 8848] 00:09:00.103 bw ( KiB/s): min=58248, max=71232, per=100.00%, avg=64357.33, stdev=6525.75, samples=3 00:09:00.103 iops : min=14562, max=17808, avg=16089.33, stdev=1631.44, samples=3 00:09:00.103 write: IOPS=16.0k, BW=62.4MiB/s (65.5MB/s)(125MiB/2001msec); 0 zone resets 00:09:00.103 slat (nsec): min=3452, max=73405, avg=6259.60, stdev=3443.54 00:09:00.103 clat (usec): min=259, max=9100, avg=4000.48, stdev=1359.10 00:09:00.103 lat (usec): min=264, max=9107, avg=4006.74, stdev=1360.42 00:09:00.103 clat percentiles (usec): 00:09:00.103 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2737], 00:09:00.103 | 30.00th=[ 2966], 40.00th=[ 3228], 50.00th=[ 3621], 60.00th=[ 4228], 00:09:00.103 | 70.00th=[ 4752], 80.00th=[ 5276], 90.00th=[ 5997], 95.00th=[ 6456], 00:09:00.103 | 99.00th=[ 7570], 99.50th=[ 7832], 99.90th=[ 8291], 99.95th=[ 8586], 00:09:00.103 | 99.99th=[ 8848] 00:09:00.103 bw ( KiB/s): min=57568, max=70808, per=100.00%, avg=64082.67, stdev=6622.51, samples=3 00:09:00.103 iops : min=14392, max=17702, avg=16020.67, stdev=1655.63, samples=3 00:09:00.103 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01% 00:09:00.103 lat (msec) : 2=0.51%, 4=55.79%, 10=43.68% 00:09:00.103 cpu : usr=98.70%, sys=0.10%, ctx=2, majf=0, minf=607 00:09:00.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:00.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.103 issued rwts: total=31923,31984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.103 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.103 00:09:00.103 Run status group 0 (all jobs): 00:09:00.103 READ: bw=62.3MiB/s (65.3MB/s), 62.3MiB/s-62.3MiB/s (65.3MB/s-65.3MB/s), io=125MiB (131MB), run=2001-2001msec 00:09:00.103 WRITE: bw=62.4MiB/s (65.5MB/s), 62.4MiB/s-62.4MiB/s (65.5MB/s-65.5MB/s), io=125MiB (131MB), run=2001-2001msec 00:09:00.103 ----------------------------------------------------- 00:09:00.103 Suppressions used: 00:09:00.103 count bytes template 00:09:00.103 1 32 /usr/src/fio/parse.c 00:09:00.103 1 8 libtcmalloc_minimal.so 00:09:00.103 ----------------------------------------------------- 00:09:00.103 00:09:00.103 03:59:53 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:00.103 03:59:53 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:00.103 03:59:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:00.103 03:59:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:00.103 03:59:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:00.103 03:59:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:00.360 03:59:53 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:00.360 03:59:53 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe 
traddr=0000.00.13.0' --bs=4096 00:09:00.360 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:00.360 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:00.361 03:59:53 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:00.619 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:00.619 fio-3.35 00:09:00.619 Starting 1 thread 00:09:10.586 00:09:10.586 test: (groupid=0, jobs=1): err= 0: pid=64690: Sun Oct 13 04:00:02 2024 00:09:10.586 read: IOPS=23.8k, BW=93.0MiB/s (97.5MB/s)(186MiB/2001msec) 00:09:10.586 slat (nsec): min=4225, max=79115, avg=4896.14, stdev=2017.99 00:09:10.586 clat (usec): min=198, max=8422, avg=2684.56, stdev=749.64 00:09:10.586 lat (usec): min=203, max=8481, avg=2689.46, stdev=750.86 00:09:10.586 clat percentiles (usec): 00:09:10.586 | 1.00th=[ 1909], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2376], 00:09:10.586 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:09:10.586 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3097], 95.00th=[ 4424], 00:09:10.586 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7570], 99.95th=[ 7832], 00:09:10.586 | 99.99th=[ 8291] 00:09:10.586 bw ( KiB/s): min=94088, max=98320, per=100.00%, avg=95682.67, stdev=2300.61, samples=3 00:09:10.586 iops : min=23522, max=24580, avg=23920.67, stdev=575.15, samples=3 00:09:10.586 write: IOPS=23.6k, BW=92.4MiB/s (96.9MB/s)(185MiB/2001msec); 0 zone resets 00:09:10.586 slat (nsec): min=4282, max=79210, avg=5180.12, stdev=2005.01 00:09:10.586 clat (usec): min=239, max=8347, avg=2688.01, stdev=753.81 00:09:10.586 lat (usec): min=244, max=8362, avg=2693.19, stdev=755.02 00:09:10.586 clat percentiles (usec): 00:09:10.586 | 1.00th=[ 1926], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2376], 
00:09:10.586 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:09:10.586 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3097], 95.00th=[ 4424], 00:09:10.586 | 99.00th=[ 6259], 99.50th=[ 6652], 99.90th=[ 7570], 99.95th=[ 7767], 00:09:10.586 | 99.99th=[ 8029] 00:09:10.586 bw ( KiB/s): min=93936, max=97688, per=100.00%, avg=95757.33, stdev=1878.39, samples=3 00:09:10.586 iops : min=23484, max=24422, avg=23939.33, stdev=469.60, samples=3 00:09:10.586 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:10.586 lat (msec) : 2=1.31%, 4=92.62%, 10=6.03% 00:09:10.586 cpu : usr=99.35%, sys=0.00%, ctx=5, majf=0, minf=606 00:09:10.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:10.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.586 issued rwts: total=47627,47318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.586 00:09:10.586 Run status group 0 (all jobs): 00:09:10.586 READ: bw=93.0MiB/s (97.5MB/s), 93.0MiB/s-93.0MiB/s (97.5MB/s-97.5MB/s), io=186MiB (195MB), run=2001-2001msec 00:09:10.586 WRITE: bw=92.4MiB/s (96.9MB/s), 92.4MiB/s-92.4MiB/s (96.9MB/s-96.9MB/s), io=185MiB (194MB), run=2001-2001msec 00:09:10.586 ----------------------------------------------------- 00:09:10.586 Suppressions used: 00:09:10.586 count bytes template 00:09:10.586 1 32 /usr/src/fio/parse.c 00:09:10.586 1 8 libtcmalloc_minimal.so 00:09:10.586 ----------------------------------------------------- 00:09:10.586 00:09:10.586 04:00:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:10.586 04:00:03 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:10.586 00:09:10.586 real 0m26.643s 00:09:10.586 user 0m15.824s 00:09:10.586 sys 0m19.388s 00:09:10.586 ************************************ 00:09:10.586 END TEST nvme_fio 00:09:10.586 ************************************ 00:09:10.586 04:00:03 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.586 04:00:03 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:10.586 00:09:10.586 real 1m35.204s 00:09:10.586 user 3m34.768s 00:09:10.586 sys 0m29.927s 00:09:10.586 04:00:03 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.586 ************************************ 00:09:10.586 END TEST nvme 00:09:10.586 ************************************ 00:09:10.586 04:00:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.586 04:00:03 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:10.586 04:00:03 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:10.586 04:00:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:10.586 04:00:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.586 04:00:03 -- common/autotest_common.sh@10 -- # set +x 00:09:10.586 ************************************ 00:09:10.586 START TEST nvme_scc 00:09:10.586 ************************************ 00:09:10.586 04:00:03 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:10.586 * Looking for test storage... 
00:09:10.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:10.586 04:00:03 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:10.586 04:00:03 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:10.586 04:00:03 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:10.586 04:00:03 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:10.586 04:00:03 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.586 04:00:03 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.586 04:00:03 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.586 04:00:03 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.586 04:00:03 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.586 04:00:03 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.586 04:00:03 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.586 04:00:03 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.586 04:00:03 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:10.587 04:00:03 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.587 04:00:03 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:10.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.587 --rc genhtml_branch_coverage=1 00:09:10.587 --rc genhtml_function_coverage=1 00:09:10.587 --rc genhtml_legend=1 00:09:10.587 --rc geninfo_all_blocks=1 00:09:10.587 --rc geninfo_unexecuted_blocks=1 00:09:10.587 00:09:10.587 ' 00:09:10.587 04:00:03 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:10.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.587 --rc genhtml_branch_coverage=1 00:09:10.587 --rc genhtml_function_coverage=1 00:09:10.587 --rc genhtml_legend=1 00:09:10.587 --rc geninfo_all_blocks=1 00:09:10.587 --rc geninfo_unexecuted_blocks=1 00:09:10.587 00:09:10.587 ' 00:09:10.587 04:00:03 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:10.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.587 --rc genhtml_branch_coverage=1 00:09:10.587 --rc genhtml_function_coverage=1 00:09:10.587 --rc genhtml_legend=1 00:09:10.587 --rc geninfo_all_blocks=1 00:09:10.587 --rc geninfo_unexecuted_blocks=1 00:09:10.587 00:09:10.587 ' 00:09:10.587 04:00:03 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:10.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.587 --rc genhtml_branch_coverage=1 00:09:10.587 --rc genhtml_function_coverage=1 00:09:10.587 --rc genhtml_legend=1 00:09:10.587 --rc geninfo_all_blocks=1 00:09:10.587 --rc geninfo_unexecuted_blocks=1 00:09:10.587 00:09:10.587 ' 00:09:10.587 04:00:03 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.587 04:00:03 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.587 04:00:03 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.587 04:00:03 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.587 04:00:03 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.587 04:00:03 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:10.587 04:00:03 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:10.587 04:00:03 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:10.587 04:00:03 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.587 04:00:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:10.587 04:00:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:10.587 04:00:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:10.587 04:00:03 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:10.587 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:10.845 Waiting for block devices as requested 00:09:10.846 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.846 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.846 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:10.846 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:16.131 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:16.131 04:00:09 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:16.131 04:00:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:16.131 04:00:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:16.131 04:00:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:16.131 04:00:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:16.131 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:16.132 04:00:09 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:16.132 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:16.133 04:00:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.133 04:00:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
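Everything in this block is the same mechanical walk: nvme_get runs nvme-cli against the device, splits each line of the id-ctrl / id-ns output on the first ':' and stores the non-empty values in a global associative array named after the device (nvme0, nvme0n1, ...), which is exactly what the repeated IFS=: / read / eval entries above are tracing. A minimal sketch of that pattern, reconstructed from the trace (lines @16-@23 of nvme/functions.sh) rather than copied from the script, so details such as whitespace handling may differ:

    # Sketch of the nvme_get pattern visible in this trace; a reconstruction
    # for illustration, not the actual SPDK helper.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                         # e.g. declares nvme0n1=() globally
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue               # skip lines without a "key: value" pair
            reg=${reg//[[:space:]]/}                # "ps    0 " -> "ps0"
            eval "${ref}[$reg]=\"${val# }\""        # nvme0n1[nsze]="0x140000"
        done < <(/usr/local/src/nvme-cli/nvme "$@") # path as shown at @16 in the trace
    }
    # Usage as seen above: nvme_get nvme0n1 id-ns /dev/nvme0n1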
00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.134 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:16.135 04:00:09 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:16.135 04:00:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:16.135 04:00:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:16.135 04:00:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:16.135 04:00:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:16.135 04:00:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:16.135 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:16.136 
04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
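The enumeration driving these nvme_get calls is also visible in the trace (functions.sh @47-@63 and scripts/common.sh @18-@27): glob /sys/class/nvme/nvme*, filter each controller through pci_can_use, identify it, then glob its namespaces and record everything in the ctrls/nvmes/bdfs/ordered_ctrls maps. A rough reconstruction of that loop, with stand-ins where the log does not show the exact implementation (how the PCI address is derived, for instance), is:

    # Reconstructed shape of the scan loop, not the real functions.sh code.
    # pci_can_use comes from scripts/common.sh (sourced by the test); the
    # PCI-address lookup below is a stand-in.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    scan_nvme_ctrls() {
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
            pci_can_use "$pci" || continue
            ctrl_dev=${ctrl##*/}                              # nvme1
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/${ctrl##*/}n"*; do               # /sys/class/nvme/nvme1/nvme1n1
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev                   # nsid -> per-namespace array name
            done
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }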
00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:16.136 04:00:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.136 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:16.137 04:00:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:16.137 04:00:09 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.137 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
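By this point the interesting fields for the run are already on record: both controllers report oncs=0x15d, and per the NVMe spec bit 8 of ONCS advertises the Copy command, which is presumably what the nvme_scc label on this job refers to; nvme0n1's in-use LBA format (lbaf4, "lbads:12") means 4096-byte blocks with no metadata. Illustrative checks against the captured arrays only; the suite itself goes through helpers in nvme/functions.sh:

    # Assumes the nvme1 / nvme0n1 arrays populated by the trace above.
    if (( ${nvme1[oncs]} & 0x100 )); then             # ONCS bit 8 = Copy command
        echo "nvme1 advertises Copy (oncs=${nvme1[oncs]})"
    fi
    lbads=12                                          # from nvme0n1's in-use lbaf4: "lbads:12"
    echo "nvme0n1 block size: $(( 1 << lbads )) bytes"   # -> 4096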
00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:16.138 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:16.139 
04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:16.139 04:00:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:16.139 04:00:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:16.139 04:00:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:16.139 04:00:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:16.139 04:00:09 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.139 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:16.140 04:00:09 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:16.140 04:00:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.140 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
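(Editor's note, illustrative sketch.) The trace keeps repeating one idiom: nvme_get pipes `nvme id-ctrl` (or `id-ns`) output through `IFS=:` / `read -r reg val` and evals each pair into a global associative array named after the device (nvme1, nvme1n1, nvme2, ...). Below is a stripped-down, self-contained sketch of that pattern; `parse_id_output` and the canned sample input are stand-ins for illustration only, not copies of functions.sh.

#!/usr/bin/env bash
# Stripped-down sketch of the parsing idiom repeated throughout this trace:
# split each "reg : val" line on the first ':' and store the pair in a
# global associative array named after the device. parse_id_output and the
# canned sample below are illustrative, not copied from functions.sh.

parse_id_output() {
    local ref=$1 reg val
    local -gA "$ref=()"                   # same trick as the trace: global assoc array
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}          # keys like "oncs      " -> "oncs"
        val=${val# }                      # drop the space right after ':'
        eval "${ref}[$reg]=\"\$val\""     # e.g. nvme2[oncs]=0x15d
    done
}

# Canned stand-in for `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2`
parse_id_output nvme2 <<'EOF'
vid    : 0x1b36
ssvid  : 0x1af4
mdts   : 7
oncs   : 0x15d
subnqn : nqn.2019-08.org.qemu:12342
EOF

echo "vid=${nvme2[vid]} oncs=${nvme2[oncs]} subnqn=${nvme2[subnqn]}"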
00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:16.141 04:00:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
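(Editor's note, illustrative sketch.) The surrounding trace is the discovery pass: for each /sys/class/nvme/nvmeX the script resolves the PCI address, runs id-ctrl, then walks the controller's namespaces and records everything in global registries (ctrls, nvmes, bdfs, ordered_ctrls). The simplified sketch below only enumerates and prints what it finds; the PCI-allowlist check (pci_can_use) and the per-device id-ctrl/id-ns parsing are omitted, and the readlink-based BDF lookup is an assumption, not necessarily how functions.sh does it.

#!/usr/bin/env bash
# Simplified sketch of the controller/namespace discovery pass seen in the
# trace: scan /sys/class/nvme, record each controller's PCI BDF, and list
# its namespaces. Registry names mirror the trace; the lookup details are
# illustrative assumptions.

declare -A ctrls=() bdfs=()        # controller name -> name / PCI BDF
declare -A namespaces=()           # controller name -> space-separated ns list

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                        # glob may match nothing
    ctrl_dev=${ctrl##*/}                              # e.g. nvme2
    bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumed lookup)
    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$bdf
    for ns in "$ctrl/${ctrl##*/}n"*; do               # e.g. .../nvme2/nvme2n1
        [[ -e $ns ]] || continue
        namespaces[$ctrl_dev]+="${ns##*/} "
    done
done

for ctrl_dev in "${!ctrls[@]}"; do
    printf '%s (%s): %s\n' "$ctrl_dev" "${bdfs[$ctrl_dev]}" "${namespaces[$ctrl_dev]:-no namespaces}"
done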
00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
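(Editor's note, illustrative sketch.) Stepping back to the nvme1n1 id-ns fields captured at the start of this excerpt, those values are enough to work out the namespace geometry: per the NVMe spec the low nibble of FLBAS selects the in-use LBA format, the lbads value inside that lbafN entry is log2 of the data block size, and nsze is the size in logical blocks. The `describe_ns` helper below is purely illustrative, with the values hard-coded from the trace.

#!/usr/bin/env bash
# Illustrative only: derive block size and capacity from the nvme1n1 id-ns
# fields captured above (flbas low nibble -> in-use LBA format, lbads ->
# log2 of the block size, nsze -> size in logical blocks).

declare -A nvme1n1=(
    [nsze]=0x17a17a                          # 1,548,666 logical blocks
    [flbas]=0x7                              # format index 7 is in use
    [lbaf7]='ms:64 lbads:12 rp:0 (in use)'   # 4096-byte blocks + 64 B metadata
)

describe_ns() {
    local -n _ns=$1
    local fmt lbads blocks
    fmt=$(( _ns[flbas] & 0xf ))              # FLBAS bits 3:0 select the format
    [[ ${_ns[lbaf$fmt]} =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    blocks=$(( _ns[nsze] ))
    printf '%s: %d blocks of %d bytes (%d MiB)\n' \
        "$1" "$blocks" $(( 1 << lbads )) $(( blocks * (1 << lbads) / 1048576 ))
}

describe_ns nvme1n1    # -> nvme1n1: 1548666 blocks of 4096 bytes (6049 MiB)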
00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.141 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:16.142 
04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.142 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
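(Editor's note, illustrative sketch.) Once a controller's id-ctrl fields sit in an array like nvme2 above, individual capability bits can be tested with plain shell arithmetic, which is the kind of check a simple-copy (scc) test needs. Per the NVMe spec, ONCS bit 8 advertises the Copy command, and nvme2 reports oncs=0x15d above, so that bit is set. The helper name `has_simple_copy` is made up for this example and is not an SPDK function.

#!/usr/bin/env bash
# Hedged illustration (has_simple_copy is not an SPDK helper): test a single
# capability bit from the parsed id-ctrl data. ONCS bit 8 is the Copy command
# support bit; oncs=0x15d has it set.

declare -A nvme2=([oncs]=0x15d)          # stand-in for the array built in the trace

has_simple_copy() {
    local -n _ctrl=$1                    # nameref to the controller's array
    (( _ctrl[oncs] & (1 << 8) ))         # non-zero (true) when Copy is supported
}

if has_simple_copy nvme2; then
    echo "nvme2 advertises the Copy command"
else
    echo "nvme2 does not advertise the Copy command"
fi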
00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:16.143 04:00:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.143 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:16.144 04:00:09 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:16.144 04:00:09 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.144 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
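The trace above repeats one pattern for every Identify Namespace field of nvme2n2: split each output line on ":" into reg/val, skip empty values, and eval the value into the nvme2n2 associative array. A minimal sketch of that pattern in plain bash, assuming nvme-cli's default "field : value" output; the array name and the key trimming here are illustrative, not the exact functions.sh code:

    declare -A ns                                    # stand-in for the nvme2n2 array built above
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                    # skip blank lines, as the trace does
        reg=${reg// /}                               # "lbaf  4 " -> "lbaf4", "nsze    " -> "nsze"
        ns[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2)
    echo "nsze=${ns[nsze]} flbas=${ns[flbas]}"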
00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 
04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.145 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 
04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:16.146 04:00:09 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:16.146 
04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.146 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:16.147 04:00:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:16.147 04:00:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:16.147 04:00:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:16.147 04:00:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.147 04:00:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:16.406 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:16.407 04:00:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
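Before this id-ctrl pass, the trace (functions.sh@47-@52 and @60-@63 above) walked /sys/class/nvme, checked each controller's PCI address with pci_can_use, and recorded the controller, its namespaces and its BDF (0000:00:12.0 for nvme2, 0000:00:13.0 for nvme3). A sketch of that enumeration, assuming standard sysfs device links rather than the script's own helpers:

    declare -A bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        dev=${ctrl##*/}                                            # e.g. nvme3
        bdfs[$dev]=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:13.0
        for ns in "$ctrl/${dev}n"*; do                             # nvme3n1, nvme3n2, ...
            [[ -e $ns ]] && echo "${ns##*/} belongs to $dev (${bdfs[$dev]})"
        done
    done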
00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
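Each namespace recorded above reports flbas=0x4 and marks lbaf4 as "(in use)": no separate metadata (ms:0) and lbads:12, i.e. 2^12 = 4096-byte logical blocks, while the lbads:9 formats are the 512-byte variants. With nsze=0x100000 blocks that works out to 4 GiB per namespace; a small check of that arithmetic, using only values present in the log:

    lbads=12; nsze=0x100000
    block=$(( 1 << lbads ))                                        # 4096-byte logical blocks
    echo "namespace size: $(( nsze * block / 1024 / 1024 )) MiB"   # 4096 MiB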
00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:16.407 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 
04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.408 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
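The repeating functions.sh@21-23 entries above are a single loop: it reads "reg : val" pairs from nvme-cli's id-ctrl output and stores them in a per-controller associative array (nvme3[oacs]=0x12a, nvme3[wctemp]=343, and so on). A minimal standalone sketch of that pattern, assuming nvme-cli is on the PATH and using /dev/nvme0 purely as an example device (the real helper in test/common/nvme/functions.sh walks every controller and picks the array name with eval):

# Sketch of the functions.sh parse loop, not the helper itself.
declare -A id_ctrl                               # stand-in for the per-controller array (nvme0=(), nvme3=(), ...)
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                     # "oacs      " -> "oacs"
    val=$(sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' <<<"$val")
    [[ -n $reg && -n $val ]] || continue         # same role as the [[ -n ... ]] guards in the trace
    id_ctrl[$reg]=$val                           # functions.sh does eval 'nvmeN[reg]="val"' instead
done < <(nvme id-ctrl /dev/nvme0)                # assumption: nvme-cli installed, /dev/nvme0 present
printf 'oacs=%s oncs=%s\n' "${id_ctrl[oacs]}" "${id_ctrl[oncs]}"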
00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:16.409 04:00:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:16.409 04:00:09 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:16.410 04:00:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:16.410 
04:00:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:16.410 04:00:09 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:16.410 04:00:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:16.410 04:00:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:16.410 04:00:09 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:16.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:17.234 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:17.234 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:17.234 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:17.234 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic
00:09:17.234 04:00:10 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:17.234 04:00:10 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:17.234 04:00:10 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:17.234 04:00:10 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:17.234 ************************************
00:09:17.234 START TEST nvme_simple_copy
00:09:17.234 ************************************
00:09:17.234 04:00:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:17.492 Initializing NVMe Controllers
00:09:17.493 Attaching to 0000:00:10.0
00:09:17.493 Controller supports SCC. Attached to 0000:00:10.0
00:09:17.493 Namespace ID: 1 size: 6GB
00:09:17.493 Initialization complete.
00:09:17.493
00:09:17.493 Controller QEMU NVMe Ctrl (12340 )
00:09:17.493 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:17.493 Namespace Block Size:4096
00:09:17.493 Writing LBAs 0 to 63 with Random Data
00:09:17.493 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:17.493 LBAs matching Written Data: 64
00:09:17.493
00:09:17.493 real 0m0.247s
00:09:17.493 user 0m0.084s
00:09:17.493 sys 0m0.062s
00:09:17.493 04:00:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:17.493 ************************************
00:09:17.493 END TEST nvme_simple_copy 04:00:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:17.493 ************************************
00:09:17.493 ************************************
00:09:17.493 END TEST nvme_scc
00:09:17.493 ************************************
00:09:17.493
00:09:17.493 real 0m7.432s
00:09:17.493 user 0m1.015s
00:09:17.493 sys 0m1.313s
00:09:17.493 04:00:10 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:17.493 04:00:10 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:17.751 04:00:10 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:17.751 04:00:10 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:17.751 04:00:10 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:17.751 04:00:10 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:17.751 04:00:10 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:17.751 04:00:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:17.751 04:00:10 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:17.751 04:00:10 -- common/autotest_common.sh@10 -- # set +x
00:09:17.751 ************************************
00:09:17.751 START TEST nvme_fdp
00:09:17.751 ************************************
00:09:17.751 04:00:10 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:09:17.751 * Looking for test storage...
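Summing up the nvme_scc run that just ended above: every controller in this run reports oncs=0x15d, ctrl_has_scc tests bit 8 of that value (the Simple Copy Command bit), nvme1 at 0000:00:10.0 is selected, and the simple_copy test binary is pointed at that PCIe address; it writes LBAs 0-63, copies them to LBA 256, and verifies that all 64 match. A standalone sketch of the selection step only, with the values hard-coded from this log (the relative ./simple_copy path assumes you are in test/nvme/simple_copy of an SPDK build):

oncs=0x15d                                       # reported by nvme1 in this run
bdf=0000:00:10.0                                 # its PCI address
if (( oncs & 1 << 8 )); then                     # ONCS bit 8 = Simple Copy Command supported
    echo "controller at $bdf supports SCC"
    ./simple_copy -r "trtype:pcie traddr:$bdf"   # same -r argument as the run_test line above
fi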
00:09:17.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:17.751 04:00:10 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:17.751 04:00:10 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:17.751 04:00:10 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:17.751 04:00:10 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:17.751 04:00:10 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.751 04:00:10 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.751 --rc genhtml_branch_coverage=1 00:09:17.751 --rc genhtml_function_coverage=1 00:09:17.751 --rc genhtml_legend=1 00:09:17.751 --rc geninfo_all_blocks=1 00:09:17.751 --rc geninfo_unexecuted_blocks=1 00:09:17.751 00:09:17.751 ' 00:09:17.751 04:00:10 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.751 --rc genhtml_branch_coverage=1 00:09:17.751 --rc genhtml_function_coverage=1 00:09:17.751 --rc genhtml_legend=1 00:09:17.751 --rc geninfo_all_blocks=1 00:09:17.751 --rc geninfo_unexecuted_blocks=1 00:09:17.751 00:09:17.751 ' 00:09:17.751 04:00:10 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.751 --rc genhtml_branch_coverage=1 00:09:17.751 --rc genhtml_function_coverage=1 00:09:17.751 --rc genhtml_legend=1 00:09:17.751 --rc geninfo_all_blocks=1 00:09:17.751 --rc geninfo_unexecuted_blocks=1 00:09:17.751 00:09:17.751 ' 00:09:17.751 04:00:10 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:17.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.751 --rc genhtml_branch_coverage=1 00:09:17.751 --rc genhtml_function_coverage=1 00:09:17.751 --rc genhtml_legend=1 00:09:17.751 --rc geninfo_all_blocks=1 00:09:17.751 --rc geninfo_unexecuted_blocks=1 00:09:17.751 00:09:17.751 ' 00:09:17.751 04:00:10 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.751 04:00:10 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.751 04:00:10 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.751 04:00:10 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.751 04:00:10 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.751 04:00:10 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:17.751 04:00:10 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
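The "lt 1.15 2" exchange at the top of this nvme_fdp block is scripts/common.sh deciding that the installed lcov is a 1.x release, which is why the 1.x-only --rc lcov_branch_coverage / --rc lcov_function_coverage flags get exported. A rough standalone equivalent of that comparison (illustrative only: it assumes purely numeric version components and that lcov is installed):

version_lt() {                                   # returns 0 (true) if $1 sorts before $2
    local -a a b
    local i n
    IFS=.-: read -ra a <<<"$1"                   # split on dots/dashes/colons, as cmp_versions does
    IFS=.-: read -ra b <<<"$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    done
    return 1
}

lcov_ver=$(lcov --version | awk '{print $NF}')   # e.g. 1.15 in this run
if version_lt "$lcov_ver" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi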
00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:17.751 04:00:10 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:17.751 04:00:10 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:17.751 04:00:10 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:18.008 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.264 Waiting for block devices as requested 00:09:18.264 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.264 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.264 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.545 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.816 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:23.816 04:00:16 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:23.816 04:00:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:23.816 04:00:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:23.816 04:00:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:23.816 04:00:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
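scan_nvme_ctrls, which starts here for the FDP test, walks /sys/class/nvme/nvme*, checks each controller's PCI address with pci_can_use, and fills the ctrls/nvmes/bdfs arrays (this run ends up with nvme1 -> 0000:00:10.0, nvme3 -> 0000:00:13.0, and so on). A minimal sketch of that sysfs walk for PCIe-attached controllers; resolving the BDF through the device symlink is an assumption of this sketch, the helper derives it its own way:

declare -A bdfs                                    # controller name -> PCI BDF, like functions.sh@62
for dev in /sys/class/nvme/nvme*; do
    [[ -e $dev ]] || continue                      # glob stays literal on a machine with no NVMe
    name=${dev##*/}                                # nvme0, nvme1, ...
    pci=$(basename "$(readlink -f "$dev/device")") # e.g. 0000:00:10.0 for a PCIe controller
    bdfs[$name]=$pci
    printf '%s -> %s\n' "$name" "$pci"
done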
00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.816 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:23.817 04:00:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:23.817 04:00:16 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:23.817 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:23.818 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:23.818 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.818 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:23.819 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:23.819 
04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:23.819 04:00:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.819 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:23.820 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.820 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:23.821 04:00:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:23.821 04:00:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:23.821 04:00:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:23.821 04:00:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.821 
04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.821 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 
04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:23.822 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:23.823 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:23.823 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:23.824 04:00:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.824 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:23.825 04:00:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:23.825 04:00:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:23.825 04:00:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:23.825 04:00:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:23.825 
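The nvme1n1 id-ns values captured above can be sanity-checked by hand: flbas=0x7 selects LBA format 7, which the trace reports as "ms:64 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte data blocks, so the namespace size is nsze blocks of 4 KiB. A minimal check (hypothetical snippet, not part of the test scripts):

# nsze and lbads taken from the nvme1n1 trace above
nsze=0x17a17a
lbads=12
echo $(( nsze * (1 << lbads) ))   # 6343335936 bytes, roughly 5.9 GiB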
04:00:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:23.825 04:00:16 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:23.825 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.826 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:23.827 04:00:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.827 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:23.828 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:23.828 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:23.829 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:23.830 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.830 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:23.831 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:23.832 
04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:23.832 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:23.832 04:00:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:23.832 04:00:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:23.832 04:00:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:23.832 04:00:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:23.832 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:23.833 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 
04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.833 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 
04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.834 04:00:16 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:23.834 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:23.835 04:00:16 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:23.835 04:00:16 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:23.836 04:00:16 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:23.836 04:00:16 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:23.836 04:00:16 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:23.836 04:00:16 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:24.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.704 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.704 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.704 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.704 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:24.704 04:00:17 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:24.704 04:00:17 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:24.704 04:00:17 
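The controller selection just traced reduces to a single bit test: CTRATT bit 19 is the Flexible Data Placement capability, so nvme3 (ctratt=0x88010) is picked while the 0x8000 controllers are skipped. A minimal sketch of the check, reusing the helper names from the trace:

ctrl_has_fdp() {
    local ctrl=$1 ctratt
    ctratt=$(get_ctratt "$ctrl")    # cached id-ctrl value, e.g. 0x88010 for nvme3
    (( ctratt & 1 << 19 ))          # bit 19 (0x80000) set => controller has FDP
}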
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.704 04:00:17 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:24.704 ************************************ 00:09:24.704 START TEST nvme_flexible_data_placement 00:09:24.704 ************************************ 00:09:24.704 04:00:17 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:24.986 Initializing NVMe Controllers 00:09:24.986 Attaching to 0000:00:13.0 00:09:24.986 Controller supports FDP Attached to 0000:00:13.0 00:09:24.986 Namespace ID: 1 Endurance Group ID: 1 00:09:24.986 Initialization complete. 00:09:24.986 00:09:24.986 ================================== 00:09:24.986 == FDP tests for Namespace: #01 == 00:09:24.986 ================================== 00:09:24.986 00:09:24.986 Get Feature: FDP: 00:09:24.986 ================= 00:09:24.986 Enabled: Yes 00:09:24.986 FDP configuration Index: 0 00:09:24.986 00:09:24.986 FDP configurations log page 00:09:24.986 =========================== 00:09:24.986 Number of FDP configurations: 1 00:09:24.986 Version: 0 00:09:24.986 Size: 112 00:09:24.986 FDP Configuration Descriptor: 0 00:09:24.986 Descriptor Size: 96 00:09:24.986 Reclaim Group Identifier format: 2 00:09:24.986 FDP Volatile Write Cache: Not Present 00:09:24.986 FDP Configuration: Valid 00:09:24.986 Vendor Specific Size: 0 00:09:24.986 Number of Reclaim Groups: 2 00:09:24.986 Number of Reclaim Unit Handles: 8 00:09:24.986 Max Placement Identifiers: 128 00:09:24.986 Number of Namespaces Supported: 256 00:09:24.986 Reclaim Unit Nominal Size: 6000000 bytes 00:09:24.986 Estimated Reclaim Unit Time Limit: Not Reported 00:09:24.986 RUH Desc #000: RUH Type: Initially Isolated 00:09:24.986 RUH Desc #001: RUH Type: Initially Isolated 00:09:24.986 RUH Desc #002: RUH Type: Initially Isolated 00:09:24.986 RUH Desc #003: RUH Type: Initially Isolated 00:09:24.986 RUH Desc #004: RUH Type: Initially Isolated 00:09:24.986 RUH Desc #005: RUH Type: Initially Isolated 00:09:24.986 RUH Desc #006: RUH Type: Initially Isolated 00:09:24.986 RUH Desc #007: RUH Type: Initially Isolated 00:09:24.986 00:09:24.986 FDP reclaim unit handle usage log page 00:09:24.986 ====================================== 00:09:24.986 Number of Reclaim Unit Handles: 8 00:09:24.986 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:24.986 RUH Usage Desc #001: RUH Attributes: Unused 00:09:24.986 RUH Usage Desc #002: RUH Attributes: Unused 00:09:24.986 RUH Usage Desc #003: RUH Attributes: Unused 00:09:24.986 RUH Usage Desc #004: RUH Attributes: Unused 00:09:24.986 RUH Usage Desc #005: RUH Attributes: Unused 00:09:24.986 RUH Usage Desc #006: RUH Attributes: Unused 00:09:24.986 RUH Usage Desc #007: RUH Attributes: Unused 00:09:24.986 00:09:24.986 FDP statistics log page 00:09:24.986 ======================= 00:09:24.986 Host bytes with metadata written: 1076563968 00:09:24.986 Media bytes with metadata written: 1076666368 00:09:24.986 Media bytes erased: 0 00:09:24.986 00:09:24.986 FDP Reclaim unit handle status 00:09:24.986 ============================== 00:09:24.986 Number of RUHS descriptors: 2 00:09:24.986 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001d4f 00:09:24.986 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:24.986 00:09:24.986 FDP write on placement id: 0 success 00:09:24.986 00:09:24.986 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:24.986 00:09:24.986 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:24.986 00:09:24.986 Get Feature: FDP Events for Placement handle: #0 00:09:24.986 ======================== 00:09:24.986 Number of FDP Events: 6 00:09:24.986 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:24.986 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:24.986 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:24.986 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:24.986 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:24.986 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:24.986 00:09:24.986 FDP events log page 00:09:24.986 =================== 00:09:24.986 Number of FDP events: 1 00:09:24.986 FDP Event #0: 00:09:24.986 Event Type: RU Not Written to Capacity 00:09:24.986 Placement Identifier: Valid 00:09:24.986 NSID: Valid 00:09:24.986 Location: Valid 00:09:24.986 Placement Identifier: 0 00:09:24.986 Event Timestamp: 6 00:09:24.986 Namespace Identifier: 1 00:09:24.986 Reclaim Group Identifier: 0 00:09:24.986 Reclaim Unit Handle Identifier: 0 00:09:24.986 00:09:24.986 FDP test passed 00:09:24.986 00:09:24.986 real 0m0.222s 00:09:24.986 user 0m0.068s 00:09:24.986 sys 0m0.053s 00:09:24.986 04:00:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.986 04:00:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:24.986 ************************************ 00:09:24.986 END TEST nvme_flexible_data_placement 00:09:24.986 ************************************ 00:09:24.986 00:09:24.986 real 0m7.373s 00:09:24.986 user 0m0.969s 00:09:24.986 sys 0m1.296s 00:09:24.986 04:00:18 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.986 04:00:18 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:24.986 ************************************ 00:09:24.986 END TEST nvme_fdp 00:09:24.986 ************************************ 00:09:24.986 04:00:18 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:24.986 04:00:18 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:24.986 04:00:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.986 04:00:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.986 04:00:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.986 ************************************ 00:09:24.986 START TEST nvme_rpc 00:09:24.986 ************************************ 00:09:24.986 04:00:18 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:25.244 * Looking for test storage...
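As an aside before the RPC tests: the log pages the fdp example binary dumped above (configurations, reclaim unit handle usage, statistics, events) can also be read by hand, assuming an nvme-cli recent enough to ship the fdp plugin; subcommand names vary slightly across versions:

nvme fdp configs /dev/nvme3n1    # FDP configurations log page
nvme fdp usage /dev/nvme3n1      # reclaim unit handle usage
nvme fdp stats /dev/nvme3n1      # host/media bytes written, bytes erased
nvme fdp events /dev/nvme3n1     # FDP events log page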
00:09:25.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:25.244 04:00:18 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:25.244 04:00:18 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:25.244 04:00:18 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:25.244 04:00:18 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.244 04:00:18 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:25.244 04:00:18 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.244 04:00:18 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:25.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.244 --rc genhtml_branch_coverage=1 00:09:25.244 --rc genhtml_function_coverage=1 00:09:25.244 --rc genhtml_legend=1 00:09:25.244 --rc geninfo_all_blocks=1 00:09:25.244 --rc geninfo_unexecuted_blocks=1 00:09:25.244 00:09:25.244 ' 00:09:25.244 04:00:18 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:25.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.244 --rc genhtml_branch_coverage=1 00:09:25.244 --rc genhtml_function_coverage=1 00:09:25.244 --rc genhtml_legend=1 00:09:25.244 --rc geninfo_all_blocks=1 00:09:25.244 --rc geninfo_unexecuted_blocks=1 00:09:25.244 00:09:25.244 ' 00:09:25.244 04:00:18 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:25.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.244 --rc genhtml_branch_coverage=1 00:09:25.244 --rc genhtml_function_coverage=1 00:09:25.244 --rc genhtml_legend=1 00:09:25.244 --rc geninfo_all_blocks=1 00:09:25.244 --rc geninfo_unexecuted_blocks=1 00:09:25.244 00:09:25.244 ' 00:09:25.244 04:00:18 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:25.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.244 --rc genhtml_branch_coverage=1 00:09:25.244 --rc genhtml_function_coverage=1 00:09:25.244 --rc genhtml_legend=1 00:09:25.244 --rc geninfo_all_blocks=1 00:09:25.244 --rc geninfo_unexecuted_blocks=1 00:09:25.244 00:09:25.244 ' 00:09:25.244 04:00:18 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:25.244 04:00:18 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:25.244 04:00:18 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:25.245 04:00:18 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:25.245 04:00:18 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66051 00:09:25.245 04:00:18 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:25.245 04:00:18 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66051 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 66051 ']' 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.245 04:00:18 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.245 04:00:18 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:25.245 [2024-10-13 04:00:18.360675] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
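The bdf=0000:00:10.0 assignment above comes out of get_first_nvme_bdf, which the trace shows expanding gen_nvme.sh output through jq and echoing the first address. A minimal sketch of that discovery path, assuming $rootdir points at the spdk checkout as in the trace:

get_nvme_bdfs() {
    local -a bdfs
    # gen_nvme.sh emits a JSON bdev config; pull every PCIe transport address
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || return 1                 # no NVMe devices found
    printf '%s\n' "${bdfs[@]}"                        # 0000:00:10.0 ... 0000:00:13.0
}
bdf=$(get_nvme_bdfs | head -n1)                       # -> 0000:00:10.0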
00:09:25.245 [2024-10-13 04:00:18.360772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66051 ] 00:09:25.503 [2024-10-13 04:00:18.504715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:25.503 [2024-10-13 04:00:18.605257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.503 [2024-10-13 04:00:18.605330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.438 04:00:19 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.438 04:00:19 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:26.438 04:00:19 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:26.438 Nvme0n1 00:09:26.438 04:00:19 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:26.438 04:00:19 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:26.696 request: 00:09:26.696 { 00:09:26.696 "bdev_name": "Nvme0n1", 00:09:26.696 "filename": "non_existing_file", 00:09:26.696 "method": "bdev_nvme_apply_firmware", 00:09:26.696 "req_id": 1 00:09:26.696 } 00:09:26.696 Got JSON-RPC error response 00:09:26.696 response: 00:09:26.696 { 00:09:26.696 "code": -32603, 00:09:26.696 "message": "open file failed." 00:09:26.696 } 00:09:26.696 04:00:19 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:26.696 04:00:19 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:26.696 04:00:19 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:26.954 04:00:19 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:26.954 04:00:19 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66051 00:09:26.954 04:00:19 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 66051 ']' 00:09:26.954 04:00:19 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 66051 00:09:26.954 04:00:19 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:26.954 04:00:19 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.954 04:00:19 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66051 00:09:26.954 04:00:19 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.954 killing process with pid 66051 00:09:26.954 04:00:19 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.954 04:00:19 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66051' 00:09:26.954 04:00:19 nvme_rpc -- common/autotest_common.sh@969 -- # kill 66051 00:09:26.954 04:00:19 nvme_rpc -- common/autotest_common.sh@974 -- # wait 66051 00:09:28.327 00:09:28.327 real 0m3.094s 00:09:28.327 user 0m6.029s 00:09:28.327 sys 0m0.460s 00:09:28.327 04:00:21 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.327 04:00:21 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.327 ************************************ 00:09:28.327 END TEST nvme_rpc 00:09:28.327 ************************************ 00:09:28.327 04:00:21 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:28.327 04:00:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:28.327 04:00:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.327 04:00:21 -- common/autotest_common.sh@10 -- # set +x 00:09:28.327 ************************************ 00:09:28.327 START TEST nvme_rpc_timeouts 00:09:28.327 ************************************ 00:09:28.327 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:28.327 * Looking for test storage... 00:09:28.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:28.327 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:28.327 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:09:28.327 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:28.327 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.327 04:00:21 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:28.327 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.327 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:28.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.327 --rc genhtml_branch_coverage=1 00:09:28.327 --rc genhtml_function_coverage=1 00:09:28.327 --rc genhtml_legend=1 00:09:28.327 --rc geninfo_all_blocks=1 00:09:28.327 --rc geninfo_unexecuted_blocks=1 00:09:28.327 00:09:28.327 ' 00:09:28.327 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:28.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.327 --rc genhtml_branch_coverage=1 00:09:28.327 --rc genhtml_function_coverage=1 00:09:28.327 --rc genhtml_legend=1 00:09:28.327 --rc geninfo_all_blocks=1 00:09:28.327 --rc geninfo_unexecuted_blocks=1 00:09:28.327 00:09:28.327 ' 00:09:28.327 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:28.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.327 --rc genhtml_branch_coverage=1 00:09:28.327 --rc genhtml_function_coverage=1 00:09:28.327 --rc genhtml_legend=1 00:09:28.327 --rc geninfo_all_blocks=1 00:09:28.328 --rc geninfo_unexecuted_blocks=1 00:09:28.328 00:09:28.328 ' 00:09:28.328 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:28.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.328 --rc genhtml_branch_coverage=1 00:09:28.328 --rc genhtml_function_coverage=1 00:09:28.328 --rc genhtml_legend=1 00:09:28.328 --rc geninfo_all_blocks=1 00:09:28.328 --rc geninfo_unexecuted_blocks=1 00:09:28.328 00:09:28.328 ' 00:09:28.328 04:00:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.328 04:00:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66110 00:09:28.328 04:00:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66110 00:09:28.328 04:00:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66148 00:09:28.328 04:00:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
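The setup traced here follows the usual shape of these RPC tests: start spdk_tgt in the background, arm a trap so the daemon and the two settings tmpfiles are cleaned up on any exit, then block until the RPC socket answers. Sketched below; the backgrounding and pid capture are implied by the trace rather than shown in it:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
spdk_tgt_pid=$!
trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$spdk_tgt_pid"   # polls /var/tmp/spdk.sock until the target answers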
00:09:28.328 04:00:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66148 00:09:28.328 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 66148 ']' 00:09:28.328 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.328 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.328 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.328 04:00:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:28.328 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.328 04:00:21 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:28.328 [2024-10-13 04:00:21.428110] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:09:28.328 [2024-10-13 04:00:21.428232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66148 ] 00:09:28.585 [2024-10-13 04:00:21.579958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:28.585 [2024-10-13 04:00:21.675591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.585 [2024-10-13 04:00:21.675643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.151 04:00:22 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.151 04:00:22 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:29.151 Checking default timeout settings: 00:09:29.151 04:00:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:29.151 04:00:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:29.722 Making settings changes with rpc: 00:09:29.722 04:00:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:29.722 04:00:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:29.722 Check default vs. modified settings: 00:09:29.722 04:00:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:29.722 04:00:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66110 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66110 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:30.008 Setting action_on_timeout is changed as expected. 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66110 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66110 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:30.008 Setting timeout_us is changed as expected. 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66110 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66110 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:30.008 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:30.009 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:30.009 Setting timeout_admin_us is changed as expected. 00:09:30.009 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:30.009 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:30.009 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:30.009 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66110 /tmp/settings_modified_66110 00:09:30.009 04:00:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66148 00:09:30.009 04:00:23 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 66148 ']' 00:09:30.009 04:00:23 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 66148 00:09:30.009 04:00:23 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:30.009 04:00:23 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.009 04:00:23 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66148 00:09:30.266 04:00:23 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.266 04:00:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.266 killing process with pid 66148 00:09:30.266 04:00:23 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66148' 00:09:30.266 04:00:23 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 66148 00:09:30.266 04:00:23 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 66148 00:09:31.199 RPC TIMEOUT SETTING TEST PASSED. 00:09:31.199 04:00:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
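[Editor's note] The check traced above boils down to: dump the default configuration, change the three NVMe timeout knobs over RPC, dump again, and confirm each field changed. A condensed sketch, using the commands and values recorded in this log (the field extraction mirrors the grep/awk/sed pipeline above; temp file names are illustrative):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    default=/tmp/settings_default_$$
    modified=/tmp/settings_modified_$$

    "$rpc_py" save_config > "$default"
    "$rpc_py" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc_py" save_config > "$modified"

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" "$default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        # the real test compares against the exact expected values; "changed at all" is enough for this sketch
        if [ "$before" != "$after" ]; then
            echo "Setting $setting is changed as expected."
        fi
    done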
00:09:31.199 00:09:31.199 real 0m3.118s 00:09:31.199 user 0m6.112s 00:09:31.199 sys 0m0.470s 00:09:31.199 04:00:24 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.199 04:00:24 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:31.199 ************************************ 00:09:31.199 END TEST nvme_rpc_timeouts 00:09:31.199 ************************************ 00:09:31.457 04:00:24 -- spdk/autotest.sh@239 -- # uname -s 00:09:31.457 04:00:24 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:31.457 04:00:24 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:31.457 04:00:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:31.457 04:00:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.457 04:00:24 -- common/autotest_common.sh@10 -- # set +x 00:09:31.457 ************************************ 00:09:31.457 START TEST sw_hotplug 00:09:31.457 ************************************ 00:09:31.457 04:00:24 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:31.457 * Looking for test storage... 00:09:31.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:31.457 04:00:24 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:31.457 04:00:24 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:31.457 04:00:24 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:09:31.457 04:00:24 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.457 04:00:24 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:31.457 04:00:24 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.457 04:00:24 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:31.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.457 --rc genhtml_branch_coverage=1 00:09:31.457 --rc genhtml_function_coverage=1 00:09:31.457 --rc genhtml_legend=1 00:09:31.457 --rc geninfo_all_blocks=1 00:09:31.457 --rc geninfo_unexecuted_blocks=1 00:09:31.457 00:09:31.457 ' 00:09:31.457 04:00:24 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:31.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.457 --rc genhtml_branch_coverage=1 00:09:31.457 --rc genhtml_function_coverage=1 00:09:31.457 --rc genhtml_legend=1 00:09:31.457 --rc geninfo_all_blocks=1 00:09:31.457 --rc geninfo_unexecuted_blocks=1 00:09:31.457 00:09:31.457 ' 00:09:31.457 04:00:24 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:31.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.457 --rc genhtml_branch_coverage=1 00:09:31.457 --rc genhtml_function_coverage=1 00:09:31.457 --rc genhtml_legend=1 00:09:31.457 --rc geninfo_all_blocks=1 00:09:31.457 --rc geninfo_unexecuted_blocks=1 00:09:31.457 00:09:31.457 ' 00:09:31.457 04:00:24 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:31.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.457 --rc genhtml_branch_coverage=1 00:09:31.457 --rc genhtml_function_coverage=1 00:09:31.457 --rc genhtml_legend=1 00:09:31.457 --rc geninfo_all_blocks=1 00:09:31.457 --rc geninfo_unexecuted_blocks=1 00:09:31.457 00:09:31.457 ' 00:09:31.457 04:00:24 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:31.714 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.972 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:31.972 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:31.972 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:31.972 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:31.972 04:00:24 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:31.972 04:00:24 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:31.972 04:00:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
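[Editor's note] The nvmes=($(nvme_in_userspace)) assignment above is expanded step by step in the entries that follow. As a simplified standalone sketch of that enumeration (the real scripts/common.sh helper layers extra checks, such as PCI allow/deny lists and FreeBSD handling, that are elided here):

    # Enumerate NVMe controllers by BDF: PCI class 01 (mass storage),
    # subclass 08 (NVM), programming interface 02 (NVMe).
    nvme_in_userspace() {
        lspci -mm -n -D | awk '$2 == "\"0108\"" && /-p02/ {print $1}'
    }

    nvmes=($(nvme_in_userspace))
    echo "found ${#nvmes[@]} NVMe controller(s): ${nvmes[*]}"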
00:09:31.972 04:00:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:31.972 04:00:24 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:31.972 04:00:24 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:31.972 04:00:24 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:31.972 04:00:24 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:31.972 04:00:24 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:32.230 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.487 Waiting for block devices as requested 00:09:32.487 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.487 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.487 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.487 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.747 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:37.747 04:00:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:37.747 04:00:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:38.007 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:38.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:38.007 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:38.267 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:38.527 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.527 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:38.527 04:00:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67001 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:38.527 04:00:31 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:38.527 04:00:31 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:38.527 04:00:31 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:38.527 04:00:31 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:38.527 04:00:31 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:38.527 04:00:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:38.787 Initializing NVMe Controllers 00:09:38.787 Attaching to 0000:00:10.0 00:09:38.787 Attaching to 0000:00:11.0 00:09:38.787 Attached to 0000:00:10.0 00:09:38.787 Attached to 0000:00:11.0 00:09:38.787 Initialization complete. Starting I/O... 
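[Editor's note] From this point the hotplug example drives I/O against the two controllers PCI_ALLOWED left bound (0000:00:10.0 and 0000:00:11.0) while remove_attach_helper repeatedly removes and re-adds them. The xtrace only shows bare "echo 1" / "echo uio_pci_generic" entries because redirection targets are not traced; the standard Linux sysfs interface they appear to drive looks like the sketch below (an assumption about the exact paths, not a verbatim excerpt of sw_hotplug.sh):

    bdf=0000:00:10.0                      # one of the controllers under test in this run

    # software hot-remove: make the function disappear from the PCI bus
    echo 1 > "/sys/bus/pci/devices/${bdf}/remove"

    # ...after the workload has observed the removal, rediscover the device
    echo 1 > /sys/bus/pci/rescan

    # rebind the rediscovered function to the userspace driver SPDK uses here
    echo uio_pci_generic > "/sys/bus/pci/devices/${bdf}/driver_override"
    echo "${bdf}" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/${bdf}/driver_override"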
00:09:38.787 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:38.787 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:38.787 00:09:39.728 QEMU NVMe Ctrl (12340 ): 2541 I/Os completed (+2541) 00:09:39.728 QEMU NVMe Ctrl (12341 ): 2489 I/Os completed (+2489) 00:09:39.728 00:09:41.104 QEMU NVMe Ctrl (12340 ): 5661 I/Os completed (+3120) 00:09:41.104 QEMU NVMe Ctrl (12341 ): 5580 I/Os completed (+3091) 00:09:41.104 00:09:42.039 QEMU NVMe Ctrl (12340 ): 8764 I/Os completed (+3103) 00:09:42.039 QEMU NVMe Ctrl (12341 ): 8654 I/Os completed (+3074) 00:09:42.039 00:09:42.987 QEMU NVMe Ctrl (12340 ): 11866 I/Os completed (+3102) 00:09:42.987 QEMU NVMe Ctrl (12341 ): 11740 I/Os completed (+3086) 00:09:42.987 00:09:43.921 QEMU NVMe Ctrl (12340 ): 15141 I/Os completed (+3275) 00:09:43.921 QEMU NVMe Ctrl (12341 ): 15011 I/Os completed (+3271) 00:09:43.921 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:44.854 [2024-10-13 04:00:37.660260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:44.854 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:44.854 [2024-10-13 04:00:37.662020] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 [2024-10-13 04:00:37.662158] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 [2024-10-13 04:00:37.662191] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 [2024-10-13 04:00:37.662247] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:44.854 [2024-10-13 04:00:37.663877] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 [2024-10-13 04:00:37.663976] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 [2024-10-13 04:00:37.664004] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 [2024-10-13 04:00:37.664055] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 EAL: Cannot open sysfs resource 00:09:44.854 EAL: pci_scan_one(): cannot parse resource 00:09:44.854 EAL: Scan for (pci) bus failed. 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:44.854 [2024-10-13 04:00:37.689498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:44.854 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:44.854 [2024-10-13 04:00:37.690845] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 [2024-10-13 04:00:37.690883] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 [2024-10-13 04:00:37.690900] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 [2024-10-13 04:00:37.690913] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:44.854 [2024-10-13 04:00:37.692253] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 [2024-10-13 04:00:37.692282] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 [2024-10-13 04:00:37.692295] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 [2024-10-13 04:00:37.692307] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.854 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:44.854 EAL: Scan for (pci) bus failed. 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:44.854 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:44.854 Attaching to 0000:00:10.0 00:09:44.854 Attached to 0000:00:10.0 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:44.854 04:00:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:44.854 Attaching to 0000:00:11.0 00:09:44.854 Attached to 0000:00:11.0 00:09:45.785 QEMU NVMe Ctrl (12340 ): 3140 I/Os completed (+3140) 00:09:45.785 QEMU NVMe Ctrl (12341 ): 2864 I/Os completed (+2864) 00:09:45.785 00:09:46.719 QEMU NVMe Ctrl (12340 ): 6365 I/Os completed (+3225) 00:09:46.719 QEMU NVMe Ctrl (12341 ): 6160 I/Os completed (+3296) 00:09:46.719 00:09:48.093 QEMU NVMe Ctrl (12340 ): 9457 I/Os completed (+3092) 00:09:48.093 QEMU NVMe Ctrl (12341 ): 9311 I/Os completed (+3151) 00:09:48.093 00:09:49.031 QEMU NVMe Ctrl (12340 ): 12589 I/Os completed (+3132) 00:09:49.031 QEMU NVMe Ctrl (12341 ): 12570 I/Os completed (+3259) 00:09:49.031 00:09:49.967 QEMU NVMe Ctrl (12340 ): 15690 I/Os completed (+3101) 00:09:49.967 QEMU NVMe Ctrl (12341 ): 15762 I/Os completed (+3192) 00:09:49.967 00:09:50.899 QEMU NVMe Ctrl (12340 ): 18736 I/Os completed (+3046) 00:09:50.899 QEMU NVMe Ctrl (12341 ): 18809 I/Os completed (+3047) 00:09:50.899 00:09:51.833 QEMU NVMe Ctrl (12340 ): 22183 I/Os completed (+3447) 00:09:51.833 QEMU NVMe Ctrl (12341 ): 22271 I/Os completed (+3462) 
00:09:51.833 00:09:52.767 QEMU NVMe Ctrl (12340 ): 25407 I/Os completed (+3224) 00:09:52.767 QEMU NVMe Ctrl (12341 ): 25477 I/Os completed (+3206) 00:09:52.767 00:09:53.764 QEMU NVMe Ctrl (12340 ): 29096 I/Os completed (+3689) 00:09:53.764 QEMU NVMe Ctrl (12341 ): 29174 I/Os completed (+3697) 00:09:53.764 00:09:54.696 QEMU NVMe Ctrl (12340 ): 32808 I/Os completed (+3712) 00:09:54.696 QEMU NVMe Ctrl (12341 ): 32890 I/Os completed (+3716) 00:09:54.696 00:09:56.069 QEMU NVMe Ctrl (12340 ): 36194 I/Os completed (+3386) 00:09:56.069 QEMU NVMe Ctrl (12341 ): 36303 I/Os completed (+3413) 00:09:56.069 00:09:57.003 QEMU NVMe Ctrl (12340 ): 39682 I/Os completed (+3488) 00:09:57.003 QEMU NVMe Ctrl (12341 ): 39802 I/Os completed (+3499) 00:09:57.003 00:09:57.003 04:00:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:57.003 04:00:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:57.003 04:00:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.003 04:00:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.003 [2024-10-13 04:00:49.946346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:57.003 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:57.003 [2024-10-13 04:00:49.947272] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-13 04:00:49.947305] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-13 04:00:49.947321] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-13 04:00:49.947336] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:57.003 [2024-10-13 04:00:49.948891] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-13 04:00:49.948930] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-13 04:00:49.948941] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-13 04:00:49.948953] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 EAL: Cannot open sysfs resource 00:09:57.003 EAL: pci_scan_one(): cannot parse resource 00:09:57.003 EAL: Scan for (pci) bus failed. 00:09:57.003 04:00:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:57.003 04:00:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:57.003 [2024-10-13 04:00:49.967832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:57.003 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:57.003 [2024-10-13 04:00:49.968701] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-13 04:00:49.968732] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-13 04:00:49.968750] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-13 04:00:49.968763] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:57.003 [2024-10-13 04:00:49.970119] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-13 04:00:49.970154] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-13 04:00:49.970166] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 [2024-10-13 04:00:49.970179] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:57.003 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:57.003 EAL: Scan for (pci) bus failed. 00:09:57.003 04:00:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:57.003 04:00:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:57.003 04:00:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.003 04:00:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.003 04:00:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:57.003 04:00:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:57.003 04:00:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.003 04:00:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:57.003 04:00:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:57.003 04:00:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:57.003 Attaching to 0000:00:10.0 00:09:57.004 Attached to 0000:00:10.0 00:09:57.262 04:00:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:57.262 04:00:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:57.262 04:00:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:57.262 Attaching to 0000:00:11.0 00:09:57.262 Attached to 0000:00:11.0 00:09:57.903 QEMU NVMe Ctrl (12340 ): 2635 I/Os completed (+2635) 00:09:57.903 QEMU NVMe Ctrl (12341 ): 2309 I/Os completed (+2309) 00:09:57.903 00:09:58.837 QEMU NVMe Ctrl (12340 ): 6310 I/Os completed (+3675) 00:09:58.837 QEMU NVMe Ctrl (12341 ): 5979 I/Os completed (+3670) 00:09:58.837 00:09:59.771 QEMU NVMe Ctrl (12340 ): 9832 I/Os completed (+3522) 00:09:59.771 QEMU NVMe Ctrl (12341 ): 9540 I/Os completed (+3561) 00:09:59.771 00:10:00.706 QEMU NVMe Ctrl (12340 ): 13010 I/Os completed (+3178) 00:10:00.706 QEMU NVMe Ctrl (12341 ): 12876 I/Os completed (+3336) 00:10:00.706 00:10:02.082 QEMU NVMe Ctrl (12340 ): 16206 I/Os completed (+3196) 00:10:02.082 QEMU NVMe Ctrl (12341 ): 15990 I/Os completed (+3114) 00:10:02.082 00:10:02.712 QEMU NVMe Ctrl (12340 ): 19667 I/Os completed (+3461) 00:10:02.712 QEMU NVMe Ctrl (12341 ): 19477 I/Os completed (+3487) 00:10:02.712 00:10:04.085 QEMU NVMe Ctrl (12340 ): 22801 I/Os completed (+3134) 00:10:04.085 QEMU NVMe Ctrl (12341 ): 22549 I/Os completed (+3072) 00:10:04.085 
00:10:05.020 QEMU NVMe Ctrl (12340 ): 26402 I/Os completed (+3601) 00:10:05.020 QEMU NVMe Ctrl (12341 ): 26145 I/Os completed (+3596) 00:10:05.020 00:10:05.956 QEMU NVMe Ctrl (12340 ): 30013 I/Os completed (+3611) 00:10:05.956 QEMU NVMe Ctrl (12341 ): 29747 I/Os completed (+3602) 00:10:05.956 00:10:06.890 QEMU NVMe Ctrl (12340 ): 33197 I/Os completed (+3184) 00:10:06.890 QEMU NVMe Ctrl (12341 ): 32922 I/Os completed (+3175) 00:10:06.890 00:10:07.824 QEMU NVMe Ctrl (12340 ): 36676 I/Os completed (+3479) 00:10:07.824 QEMU NVMe Ctrl (12341 ): 36398 I/Os completed (+3476) 00:10:07.824 00:10:08.764 QEMU NVMe Ctrl (12340 ): 40427 I/Os completed (+3751) 00:10:08.764 QEMU NVMe Ctrl (12341 ): 40158 I/Os completed (+3760) 00:10:08.764 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.334 [2024-10-13 04:01:02.213295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:09.334 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:09.334 [2024-10-13 04:01:02.214281] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 [2024-10-13 04:01:02.214393] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 [2024-10-13 04:01:02.214425] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 [2024-10-13 04:01:02.214481] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:09.334 [2024-10-13 04:01:02.216122] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 [2024-10-13 04:01:02.216214] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 [2024-10-13 04:01:02.216239] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 [2024-10-13 04:01:02.216290] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:09.334 [2024-10-13 04:01:02.233112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:09.334 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:09.334 [2024-10-13 04:01:02.234082] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 [2024-10-13 04:01:02.234179] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 [2024-10-13 04:01:02.234208] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 [2024-10-13 04:01:02.234233] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:09.334 [2024-10-13 04:01:02.235643] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 [2024-10-13 04:01:02.235727] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 [2024-10-13 04:01:02.235758] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 [2024-10-13 04:01:02.235824] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:09.334 Attaching to 0000:00:10.0 00:10:09.334 Attached to 0000:00:10.0 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.334 04:01:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:09.334 Attaching to 0000:00:11.0 00:10:09.334 Attached to 0000:00:11.0 00:10:09.334 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:09.334 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:09.334 [2024-10-13 04:01:02.478908] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:21.539 04:01:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:21.540 04:01:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:21.540 04:01:14 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.82 00:10:21.540 04:01:14 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.82 00:10:21.540 04:01:14 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:21.540 04:01:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.82 00:10:21.540 04:01:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.82 2 00:10:21.540 remove_attach_helper took 42.82s to complete (handling 2 nvme drive(s)) 04:01:14 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:28.102 04:01:20 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67001 00:10:28.102 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67001) - No such process 00:10:28.102 04:01:20 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67001 00:10:28.102 04:01:20 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:28.102 04:01:20 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:28.102 04:01:20 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:28.102 04:01:20 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67550 00:10:28.102 04:01:20 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:28.102 04:01:20 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67550 00:10:28.102 04:01:20 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:28.102 04:01:20 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67550 ']' 00:10:28.102 04:01:20 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.102 04:01:20 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.102 04:01:20 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.102 04:01:20 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.102 04:01:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:28.102 [2024-10-13 04:01:20.557155] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:10:28.102 [2024-10-13 04:01:20.557278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67550 ] 00:10:28.102 [2024-10-13 04:01:20.703817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.102 [2024-10-13 04:01:20.797497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.360 04:01:21 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.360 04:01:21 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:28.360 04:01:21 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:28.360 04:01:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.360 04:01:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:28.360 04:01:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.360 04:01:21 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:28.360 04:01:21 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:28.360 04:01:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:28.360 04:01:21 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:28.360 04:01:21 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:28.360 04:01:21 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:28.360 04:01:21 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:28.360 04:01:21 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:28.360 04:01:21 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:28.360 04:01:21 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:28.360 04:01:21 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:28.360 04:01:21 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:28.360 04:01:21 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.921 04:01:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.921 04:01:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.921 04:01:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:34.921 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:34.921 [2024-10-13 04:01:27.498299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:34.921 [2024-10-13 04:01:27.499597] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.921 [2024-10-13 04:01:27.499644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.921 [2024-10-13 04:01:27.499655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.921 [2024-10-13 04:01:27.499675] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.921 [2024-10-13 04:01:27.499683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.921 [2024-10-13 04:01:27.499690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.921 [2024-10-13 04:01:27.499697] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.921 [2024-10-13 04:01:27.499705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.922 [2024-10-13 04:01:27.499712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.922 [2024-10-13 04:01:27.499723] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.922 [2024-10-13 04:01:27.499729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.922 [2024-10-13 04:01:27.499737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.922 [2024-10-13 04:01:27.898312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:10:34.922 [2024-10-13 04:01:27.899663] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.922 [2024-10-13 04:01:27.899693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.922 [2024-10-13 04:01:27.899704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.922 [2024-10-13 04:01:27.899720] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.922 [2024-10-13 04:01:27.899729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.922 [2024-10-13 04:01:27.899736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.922 [2024-10-13 04:01:27.899745] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.922 [2024-10-13 04:01:27.899752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.922 [2024-10-13 04:01:27.899760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.922 [2024-10-13 04:01:27.899767] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.922 [2024-10-13 04:01:27.899775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.922 [2024-10-13 04:01:27.899781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.922 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:34.922 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:34.922 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:34.922 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.922 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.922 04:01:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.922 04:01:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.922 04:01:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.922 04:01:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.922 04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:34.922 04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:35.182 04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:35.182 04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:35.182 04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:35.182 04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:35.182 04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:35.182 
04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:35.182 04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:35.182 04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:35.182 04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:35.182 04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:35.182 04:01:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.378 04:01:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.378 04:01:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.378 04:01:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.378 [2024-10-13 04:01:40.298542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:47.378 [2024-10-13 04:01:40.300013] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.378 [2024-10-13 04:01:40.300046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.378 [2024-10-13 04:01:40.300057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.378 [2024-10-13 04:01:40.300074] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.378 [2024-10-13 04:01:40.300081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.378 [2024-10-13 04:01:40.300090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.378 [2024-10-13 04:01:40.300097] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.378 [2024-10-13 04:01:40.300105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.378 [2024-10-13 04:01:40.300111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.378 [2024-10-13 04:01:40.300119] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.378 [2024-10-13 04:01:40.300125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.378 [2024-10-13 04:01:40.300133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.378 04:01:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.378 04:01:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.378 04:01:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:47.378 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:47.985 [2024-10-13 04:01:40.798547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:10:47.985 [2024-10-13 04:01:40.799781] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.985 [2024-10-13 04:01:40.799813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.985 [2024-10-13 04:01:40.799826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.985 [2024-10-13 04:01:40.799840] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.985 [2024-10-13 04:01:40.799849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.985 [2024-10-13 04:01:40.799856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.985 [2024-10-13 04:01:40.799864] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.985 [2024-10-13 04:01:40.799871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.985 [2024-10-13 04:01:40.799879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.985 [2024-10-13 04:01:40.799886] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.985 [2024-10-13 04:01:40.799894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.985 [2024-10-13 04:01:40.799900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.985 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:47.985 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.985 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.985 04:01:40 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.985 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.985 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.985 04:01:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.985 04:01:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.985 04:01:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.985 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:47.985 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:47.985 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.985 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.985 04:01:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:47.985 04:01:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:47.985 04:01:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:47.985 04:01:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.985 04:01:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.985 04:01:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:47.985 04:01:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:48.267 04:01:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.267 04:01:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.464 04:01:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.464 04:01:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.464 04:01:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.464 04:01:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.464 04:01:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.464 [2024-10-13 04:01:53.198787] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:00.464 [2024-10-13 04:01:53.200032] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.464 [2024-10-13 04:01:53.200068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.464 [2024-10-13 04:01:53.200079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.464 [2024-10-13 04:01:53.200096] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.464 [2024-10-13 04:01:53.200103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.464 [2024-10-13 04:01:53.200114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.464 [2024-10-13 04:01:53.200121] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.464 [2024-10-13 04:01:53.200129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.464 [2024-10-13 04:01:53.200136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.464 [2024-10-13 04:01:53.200144] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.464 [2024-10-13 04:01:53.200151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.464 [2024-10-13 04:01:53.200159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.464 04:01:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:00.464 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:00.464 [2024-10-13 04:01:53.598792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
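The (( 1 > 0 )) / sleep 0.5 / printf 'Still waiting for %s to be gone' lines above are one pass of the poll that follows each surprise-remove: the script re-reads bdev_bdfs every half second until no PCI address is reported. The exact loop construct is not visible in the xtrace, so this is a sketch of the shape it implies:

    # Poll (sw_hotplug.sh@50-51) until the hot-removed controllers drop out of the bdev list.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

Once the count reaches zero, the trace continues at sw_hotplug.sh@56 with the re-attach sequence.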
00:11:00.464 [2024-10-13 04:01:53.600037] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.464 [2024-10-13 04:01:53.600070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.464 [2024-10-13 04:01:53.600081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.464 [2024-10-13 04:01:53.600096] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.464 [2024-10-13 04:01:53.600104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.464 [2024-10-13 04:01:53.600111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.464 [2024-10-13 04:01:53.600119] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.464 [2024-10-13 04:01:53.600126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.464 [2024-10-13 04:01:53.600135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.464 [2024-10-13 04:01:53.600141] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.464 [2024-10-13 04:01:53.600149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.464 [2024-10-13 04:01:53.600155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.722 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:00.722 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.722 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.722 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.722 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.722 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.722 04:01:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.722 04:01:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.722 04:01:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.722 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:00.722 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:00.722 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.722 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.722 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:00.980 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:00.980 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:00.980 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.980 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.980 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:00.980 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:00.980 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:00.980 04:01:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:13.184 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:13.184 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:13.184 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:13.184 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.184 04:02:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.62 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.62 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.62 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.62 2 00:11:13.184 remove_attach_helper took 44.62s to complete (handling 2 nvme drive(s)) 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:13.184 04:02:06 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:13.184 04:02:06 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:13.184 04:02:06 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.747 04:02:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.747 04:02:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.747 04:02:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:19.747 [2024-10-13 04:02:12.146575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:19.747 [2024-10-13 04:02:12.147881] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.747 [2024-10-13 04:02:12.147934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.747 [2024-10-13 04:02:12.147944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.747 [2024-10-13 04:02:12.147961] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.747 [2024-10-13 04:02:12.147969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.747 [2024-10-13 04:02:12.147977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.747 [2024-10-13 04:02:12.147984] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.747 [2024-10-13 04:02:12.147991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.747 [2024-10-13 04:02:12.147997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.747 [2024-10-13 04:02:12.148006] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.747 [2024-10-13 04:02:12.148012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.747 [2024-10-13 04:02:12.148021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.747 [2024-10-13 04:02:12.546575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
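A little further up, the sw_hotplug.sh@122 and autotest_common.sh@707-720 lines show how the 44.62 s figure was produced: debug_remove_attach_helper 3 6 true wraps remove_attach_helper in timing_cmd, which runs the command under bash's time keyword with TIMEFORMAT=%2R and echoes the elapsed seconds. Only the xtrace is in the log, not the helper bodies, so the redirection plumbing below is an assumption; the variable names and the printf format string come from the trace.

    # Sketch of timing_cmd (common/autotest_common.sh@707-720): time a command and
    # print the elapsed wall-clock seconds with two decimals.
    timing_cmd() {
        local cmd_es=0
        [[ -t 0 ]] && exec < /dev/null        # assumption: detach stdin when interactive
        local time=0 TIMEFORMAT=%2R
        time=$({ time "$@" &> /dev/null; } 2>&1) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }

    # Sketch of debug_remove_attach_helper (nvme/sw_hotplug.sh@19-22):
    debug_remove_attach_helper() {            # called here as: 3 6 true
        local helper_time=0
        helper_time=$(timing_cmd remove_attach_helper "$@")
        printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
            "$helper_time" "${#nvmes[@]}"     # the trace only shows the value 2; the count variable is an assumption
    }

The arguments 3 6 true become hotplug_events=3, hotplug_wait=6 and use_bdev=true at sw_hotplug.sh@27-29, i.e. three remove/attach cycles with a 6-second wait unit, verified through the bdev list.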
00:11:19.747 [2024-10-13 04:02:12.547547] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.747 [2024-10-13 04:02:12.547579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.747 [2024-10-13 04:02:12.547591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.747 [2024-10-13 04:02:12.547606] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.747 [2024-10-13 04:02:12.547625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.747 [2024-10-13 04:02:12.547632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.747 [2024-10-13 04:02:12.547653] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.747 [2024-10-13 04:02:12.547660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.747 [2024-10-13 04:02:12.547669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.747 [2024-10-13 04:02:12.547677] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.747 [2024-10-13 04:02:12.547685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.747 [2024-10-13 04:02:12.547691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.747 04:02:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.747 04:02:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.747 04:02:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.747 04:02:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.017 04:02:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.017 04:02:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.017 04:02:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:32.017 [2024-10-13 04:02:24.946847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:32.017 [2024-10-13 04:02:24.948253] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.017 [2024-10-13 04:02:24.948291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.017 [2024-10-13 04:02:24.948301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.017 [2024-10-13 04:02:24.948317] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.017 [2024-10-13 04:02:24.948325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.017 [2024-10-13 04:02:24.948333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.017 [2024-10-13 04:02:24.948341] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.017 [2024-10-13 04:02:24.948349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.017 [2024-10-13 04:02:24.948355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.017 [2024-10-13 04:02:24.948364] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.017 [2024-10-13 04:02:24.948371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.017 [2024-10-13 04:02:24.948379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:32.017 04:02:24 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.017 04:02:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.017 04:02:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.017 04:02:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.017 04:02:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.017 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:32.017 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:32.278 [2024-10-13 04:02:25.346855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:32.278 [2024-10-13 04:02:25.348157] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.278 [2024-10-13 04:02:25.348191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.278 [2024-10-13 04:02:25.348201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.278 [2024-10-13 04:02:25.348216] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.278 [2024-10-13 04:02:25.348227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.278 [2024-10-13 04:02:25.348234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.278 [2024-10-13 04:02:25.348243] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.278 [2024-10-13 04:02:25.348250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.278 [2024-10-13 04:02:25.348258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.278 [2024-10-13 04:02:25.348265] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.278 [2024-10-13 04:02:25.348273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.278 [2024-10-13 04:02:25.348279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:32.539 04:02:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.539 04:02:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.539 04:02:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.539 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:32.800 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:32.800 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.800 04:02:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.036 04:02:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.036 04:02:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.036 04:02:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:45.036 [2024-10-13 04:02:37.847065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
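Putting the traced pieces together, each of the three hotplug events follows the same script: surprise-remove both controllers, wait for them to vanish from the bdev list, rebind them to uio_pci_generic, give SPDK 12 seconds to re-attach, then check that exactly the two expected BDFs are back. bash xtrace never prints redirection targets, so the sysfs destinations of the echoes below are assumptions; only the echoed values and the line numbers come from the log.

    # One remove/attach event as traced at sw_hotplug.sh@38-71 (sysfs paths assumed).
    for dev in "${nvmes[@]}"; do
        echo 1                       # @40: surprise-remove, presumably the device's sysfs remove node
    done
    # @50-51: poll bdev_bdfs until the list is empty (loop sketched earlier)
    echo 1                           # @56: presumably re-enables PCI enumeration (e.g. a rescan knob)
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic         # @59: driver to bind
        echo "$dev"                  # @60-61: the BDF is echoed twice (0000:00:10.0, then 0000:00:11.0)
        echo "$dev"
        echo ''                      # @62: presumably clears the override again
    done
    sleep 12                         # @66: matches 2 * hotplug_wait (hotplug_wait=6 at @28)
    bdfs=($(bdev_bdfs))              # @70
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]   # @71: both controllers must be visible again

The ABORTED - BY REQUEST completions surrounding each event are the expected by-product of the test itself: when a controller is yanked, the driver marks it failed and aborts its outstanding ASYNC EVENT REQUEST commands, which is what the nvme_pcie_qpair_abort_trackers and spdk_nvme_print_completion notices record.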
00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:45.036 [2024-10-13 04:02:37.848267] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.036 [2024-10-13 04:02:37.848296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.036 [2024-10-13 04:02:37.848306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.036 [2024-10-13 04:02:37.848322] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.036 [2024-10-13 04:02:37.848330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.036 [2024-10-13 04:02:37.848338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.036 [2024-10-13 04:02:37.848345] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.036 [2024-10-13 04:02:37.848354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.036 [2024-10-13 04:02:37.848361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.036 [2024-10-13 04:02:37.848369] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.036 [2024-10-13 04:02:37.848375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.036 [2024-10-13 04:02:37.848383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.036 04:02:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.036 04:02:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.036 04:02:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:45.036 04:02:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:45.297 [2024-10-13 04:02:38.247075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:45.297 [2024-10-13 04:02:38.248352] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.297 [2024-10-13 04:02:38.248387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.297 [2024-10-13 04:02:38.248400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.297 [2024-10-13 04:02:38.248414] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.297 [2024-10-13 04:02:38.248423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.297 [2024-10-13 04:02:38.248431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.297 [2024-10-13 04:02:38.248440] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.297 [2024-10-13 04:02:38.248447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.297 [2024-10-13 04:02:38.248456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.297 [2024-10-13 04:02:38.248463] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.297 [2024-10-13 04:02:38.248472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.297 [2024-10-13 04:02:38.248479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.297 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:45.297 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:45.297 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:45.297 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.297 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.297 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.297 04:02:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.297 04:02:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.297 04:02:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.297 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:45.297 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:45.556 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:45.556 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:45.556 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:45.556 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:45.556 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:45.556 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:45.556 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:45.556 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:45.556 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:45.556 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:45.556 04:02:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:57.785 04:02:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:57.785 04:02:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:57.785 04:02:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:57.785 04:02:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.785 04:02:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.785 04:02:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.785 04:02:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:57.785 04:02:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.64 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.64 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:57.785 04:02:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:11:57.785 04:02:50 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:11:57.785 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 04:02:50 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:57.785 04:02:50 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67550 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67550 ']' 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67550 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67550 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.785 killing process with pid 67550 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67550' 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67550 00:11:57.785 04:02:50 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67550 00:11:59.169 04:02:51 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:59.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:59.740 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:59.740 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:59.740 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:59.740 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:59.740 00:11:59.740 real 2m28.469s 00:11:59.740 user 1m50.487s 00:11:59.740 sys 0m16.551s 00:11:59.740 04:02:52 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.740 ************************************ 00:11:59.740 04:02:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:59.740 END TEST sw_hotplug 00:11:59.740 ************************************ 00:12:00.002 04:02:52 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:00.002 04:02:52 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:00.002 04:02:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:00.002 04:02:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.002 04:02:52 -- common/autotest_common.sh@10 -- # set +x 00:12:00.002 ************************************ 00:12:00.002 START TEST nvme_xnvme 00:12:00.002 ************************************ 00:12:00.002 04:02:52 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:00.002 * Looking for test storage... 00:12:00.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:00.002 04:02:52 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:00.002 04:02:52 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:00.002 04:02:52 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:00.002 04:02:53 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:00.002 04:02:53 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.002 04:02:53 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.002 --rc genhtml_branch_coverage=1 00:12:00.002 --rc genhtml_function_coverage=1 00:12:00.002 --rc genhtml_legend=1 00:12:00.002 --rc geninfo_all_blocks=1 00:12:00.002 --rc geninfo_unexecuted_blocks=1 00:12:00.002 00:12:00.002 ' 00:12:00.002 04:02:53 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.002 --rc genhtml_branch_coverage=1 00:12:00.002 --rc genhtml_function_coverage=1 00:12:00.002 --rc genhtml_legend=1 00:12:00.002 --rc geninfo_all_blocks=1 00:12:00.002 --rc geninfo_unexecuted_blocks=1 00:12:00.002 00:12:00.002 ' 00:12:00.002 04:02:53 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.002 --rc genhtml_branch_coverage=1 00:12:00.002 --rc genhtml_function_coverage=1 00:12:00.002 --rc genhtml_legend=1 00:12:00.002 --rc geninfo_all_blocks=1 00:12:00.002 --rc geninfo_unexecuted_blocks=1 00:12:00.002 00:12:00.002 ' 00:12:00.002 04:02:53 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.002 --rc genhtml_branch_coverage=1 00:12:00.002 --rc genhtml_function_coverage=1 00:12:00.002 --rc genhtml_legend=1 00:12:00.002 --rc geninfo_all_blocks=1 00:12:00.002 --rc geninfo_unexecuted_blocks=1 00:12:00.002 00:12:00.002 ' 00:12:00.002 04:02:53 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.002 04:02:53 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.002 04:02:53 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.002 04:02:53 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.002 04:02:53 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.002 04:02:53 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:00.002 04:02:53 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.002 04:02:53 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:00.002 04:02:53 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:00.002 04:02:53 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.002 04:02:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:00.002 ************************************ 00:12:00.002 START TEST xnvme_to_malloc_dd_copy 00:12:00.002 ************************************ 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:00.002 04:02:53 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:00.002 04:02:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:00.002 { 00:12:00.002 "subsystems": [ 00:12:00.002 { 00:12:00.002 "subsystem": "bdev", 00:12:00.002 "config": [ 00:12:00.002 { 00:12:00.002 "params": { 00:12:00.002 "block_size": 512, 00:12:00.002 "num_blocks": 2097152, 00:12:00.002 "name": "malloc0" 00:12:00.002 }, 00:12:00.002 "method": "bdev_malloc_create" 00:12:00.002 }, 00:12:00.003 { 00:12:00.003 "params": { 00:12:00.003 "io_mechanism": "libaio", 00:12:00.003 "filename": "/dev/nullb0", 00:12:00.003 "name": "null0" 00:12:00.003 }, 00:12:00.003 "method": "bdev_xnvme_create" 00:12:00.003 }, 00:12:00.003 { 00:12:00.003 "method": "bdev_wait_for_examine" 00:12:00.003 } 00:12:00.003 ] 00:12:00.003 } 00:12:00.003 ] 00:12:00.003 } 00:12:00.262 [2024-10-13 04:02:53.182110] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
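The JSON block printed above is the configuration that gen_conf hands to spdk_dd over a pipe (/dev/fd/62 in the trace): a 1 GiB malloc bdev (2097152 blocks of 512 bytes) as the source, the null_blk device /dev/nullb0 wrapped as an xnvme bdev with the libaio mechanism as the destination, and a bdev_wait_for_examine barrier. A standalone sketch of the same run follows; the spdk_dd path and the JSON content are taken from the trace, while the temporary config file /tmp/xnvme_dd.json replaces the harness's file-descriptor plumbing and is purely illustrative.

    # Backing device for the xnvme bdev, as created by init_null_blk in the trace.
    modprobe null_blk gb=1

    # Same bdev configuration as the JSON shown above (hypothetical temp file path).
    echo '{
      "subsystems": [{
        "subsystem": "bdev",
        "config": [
          {"method": "bdev_malloc_create",
           "params": {"name": "malloc0", "block_size": 512, "num_blocks": 2097152}},
          {"method": "bdev_xnvme_create",
           "params": {"name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio"}},
          {"method": "bdev_wait_for_examine"}
        ]
      }]
    }' > /tmp/xnvme_dd.json

    # Copy malloc0 into the xnvme-backed null0 bdev.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_dd.json

The next spdk_dd run below (xnvme.sh@47) is the reverse check: the same configuration with --ib=null0 --ob=malloc0, copying the data back out of the xnvme bdev.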
00:12:00.262 [2024-10-13 04:02:53.182244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68909 ] 00:12:00.262 [2024-10-13 04:02:53.334727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.522 [2024-10-13 04:02:53.424675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.434  [2024-10-13T04:02:56.536Z] Copying: 293/1024 [MB] (293 MBps) [2024-10-13T04:02:57.479Z] Copying: 587/1024 [MB] (294 MBps) [2024-10-13T04:02:57.740Z] Copying: 880/1024 [MB] (293 MBps) [2024-10-13T04:02:59.654Z] Copying: 1024/1024 [MB] (average 294 MBps) 00:12:06.494 00:12:06.494 04:02:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:06.494 04:02:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:06.494 04:02:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:06.494 04:02:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:06.756 { 00:12:06.756 "subsystems": [ 00:12:06.756 { 00:12:06.756 "subsystem": "bdev", 00:12:06.756 "config": [ 00:12:06.756 { 00:12:06.756 "params": { 00:12:06.756 "block_size": 512, 00:12:06.756 "num_blocks": 2097152, 00:12:06.756 "name": "malloc0" 00:12:06.756 }, 00:12:06.756 "method": "bdev_malloc_create" 00:12:06.756 }, 00:12:06.756 { 00:12:06.756 "params": { 00:12:06.756 "io_mechanism": "libaio", 00:12:06.756 "filename": "/dev/nullb0", 00:12:06.756 "name": "null0" 00:12:06.756 }, 00:12:06.756 "method": "bdev_xnvme_create" 00:12:06.756 }, 00:12:06.756 { 00:12:06.756 "method": "bdev_wait_for_examine" 00:12:06.756 } 00:12:06.756 ] 00:12:06.756 } 00:12:06.756 ] 00:12:06.756 } 00:12:06.756 [2024-10-13 04:02:59.710196] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:12:06.756 [2024-10-13 04:02:59.710314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68996 ] 00:12:06.756 [2024-10-13 04:02:59.857597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.017 [2024-10-13 04:02:59.975880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.933  [2024-10-13T04:03:03.493Z] Copying: 226/1024 [MB] (226 MBps) [2024-10-13T04:03:04.087Z] Copying: 491/1024 [MB] (265 MBps) [2024-10-13T04:03:05.026Z] Copying: 793/1024 [MB] (302 MBps) [2024-10-13T04:03:06.934Z] Copying: 1024/1024 [MB] (average 272 MBps) 00:12:13.774 00:12:13.774 04:03:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:13.774 04:03:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:13.774 04:03:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:13.774 04:03:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:13.774 04:03:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:13.774 04:03:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:13.774 { 00:12:13.774 "subsystems": [ 00:12:13.774 { 00:12:13.774 "subsystem": "bdev", 00:12:13.774 "config": [ 00:12:13.774 { 00:12:13.774 "params": { 00:12:13.774 "block_size": 512, 00:12:13.774 "num_blocks": 2097152, 00:12:13.774 "name": "malloc0" 00:12:13.774 }, 00:12:13.774 "method": "bdev_malloc_create" 00:12:13.774 }, 00:12:13.774 { 00:12:13.774 "params": { 00:12:13.774 "io_mechanism": "io_uring", 00:12:13.774 "filename": "/dev/nullb0", 00:12:13.774 "name": "null0" 00:12:13.774 }, 00:12:13.774 "method": "bdev_xnvme_create" 00:12:13.774 }, 00:12:13.774 { 00:12:13.774 "method": "bdev_wait_for_examine" 00:12:13.774 } 00:12:13.774 ] 00:12:13.774 } 00:12:13.774 ] 00:12:13.774 } 00:12:13.774 [2024-10-13 04:03:06.792447] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:12:13.774 [2024-10-13 04:03:06.792559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69078 ] 00:12:14.032 [2024-10-13 04:03:06.941944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.032 [2024-10-13 04:03:07.022100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.934  [2024-10-13T04:03:10.032Z] Copying: 311/1024 [MB] (311 MBps) [2024-10-13T04:03:10.970Z] Copying: 622/1024 [MB] (311 MBps) [2024-10-13T04:03:11.228Z] Copying: 933/1024 [MB] (311 MBps) [2024-10-13T04:03:13.131Z] Copying: 1024/1024 [MB] (average 311 MBps) 00:12:19.971 00:12:19.971 04:03:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:19.971 04:03:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:19.971 04:03:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:19.971 04:03:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:19.971 { 00:12:19.971 "subsystems": [ 00:12:19.971 { 00:12:19.971 "subsystem": "bdev", 00:12:19.971 "config": [ 00:12:19.971 { 00:12:19.971 "params": { 00:12:19.971 "block_size": 512, 00:12:19.971 "num_blocks": 2097152, 00:12:19.971 "name": "malloc0" 00:12:19.971 }, 00:12:19.971 "method": "bdev_malloc_create" 00:12:19.971 }, 00:12:19.971 { 00:12:19.971 "params": { 00:12:19.971 "io_mechanism": "io_uring", 00:12:19.971 "filename": "/dev/nullb0", 00:12:19.971 "name": "null0" 00:12:19.971 }, 00:12:19.971 "method": "bdev_xnvme_create" 00:12:19.971 }, 00:12:19.971 { 00:12:19.971 "method": "bdev_wait_for_examine" 00:12:19.971 } 00:12:19.971 ] 00:12:19.971 } 00:12:19.971 ] 00:12:19.971 } 00:12:19.971 [2024-10-13 04:03:13.033354] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:12:19.971 [2024-10-13 04:03:13.033467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69154 ] 00:12:20.229 [2024-10-13 04:03:13.182058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.229 [2024-10-13 04:03:13.261517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.130  [2024-10-13T04:03:16.242Z] Copying: 316/1024 [MB] (316 MBps) [2024-10-13T04:03:17.189Z] Copying: 633/1024 [MB] (317 MBps) [2024-10-13T04:03:17.447Z] Copying: 951/1024 [MB] (317 MBps) [2024-10-13T04:03:19.351Z] Copying: 1024/1024 [MB] (average 317 MBps) 00:12:26.191 00:12:26.191 04:03:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:26.191 04:03:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:26.191 ************************************ 00:12:26.191 END TEST xnvme_to_malloc_dd_copy 00:12:26.191 ************************************ 00:12:26.191 00:12:26.191 real 0m26.061s 00:12:26.191 user 0m22.839s 00:12:26.191 sys 0m2.689s 00:12:26.191 04:03:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:26.191 04:03:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:26.191 04:03:19 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:26.191 04:03:19 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:26.191 04:03:19 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:26.191 04:03:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.191 ************************************ 00:12:26.191 START TEST xnvme_bdevperf 00:12:26.191 ************************************ 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:26.191 
04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:26.191 04:03:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:26.191 { 00:12:26.191 "subsystems": [ 00:12:26.191 { 00:12:26.191 "subsystem": "bdev", 00:12:26.191 "config": [ 00:12:26.191 { 00:12:26.191 "params": { 00:12:26.191 "io_mechanism": "libaio", 00:12:26.191 "filename": "/dev/nullb0", 00:12:26.191 "name": "null0" 00:12:26.191 }, 00:12:26.191 "method": "bdev_xnvme_create" 00:12:26.191 }, 00:12:26.191 { 00:12:26.191 "method": "bdev_wait_for_examine" 00:12:26.191 } 00:12:26.191 ] 00:12:26.191 } 00:12:26.191 ] 00:12:26.191 } 00:12:26.191 [2024-10-13 04:03:19.261657] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:26.191 [2024-10-13 04:03:19.261765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69252 ] 00:12:26.450 [2024-10-13 04:03:19.409892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.450 [2024-10-13 04:03:19.490542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.709 Running I/O for 5 seconds... 00:12:28.578 201984.00 IOPS, 789.00 MiB/s [2024-10-13T04:03:23.118Z] 202432.00 IOPS, 790.75 MiB/s [2024-10-13T04:03:24.059Z] 202688.00 IOPS, 791.75 MiB/s [2024-10-13T04:03:24.998Z] 202800.00 IOPS, 792.19 MiB/s 00:12:31.838 Latency(us) 00:12:31.838 [2024-10-13T04:03:24.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.838 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:31.838 null0 : 5.00 202868.54 792.46 0.00 0.00 313.26 111.85 1581.69 00:12:31.838 [2024-10-13T04:03:24.998Z] =================================================================================================================== 00:12:31.838 [2024-10-13T04:03:24.998Z] Total : 202868.54 792.46 0.00 0.00 313.26 111.85 1581.69 00:12:32.407 04:03:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:32.407 04:03:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:32.407 04:03:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:32.407 04:03:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:32.407 04:03:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:32.407 04:03:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:32.407 { 00:12:32.407 "subsystems": [ 00:12:32.407 { 00:12:32.407 "subsystem": "bdev", 00:12:32.407 "config": [ 00:12:32.407 { 00:12:32.407 "params": { 00:12:32.407 "io_mechanism": "io_uring", 00:12:32.407 "filename": "/dev/nullb0", 00:12:32.407 "name": "null0" 00:12:32.407 }, 00:12:32.407 "method": "bdev_xnvme_create" 00:12:32.407 }, 00:12:32.407 { 00:12:32.407 "method": 
"bdev_wait_for_examine" 00:12:32.407 } 00:12:32.407 ] 00:12:32.407 } 00:12:32.407 ] 00:12:32.407 } 00:12:32.407 [2024-10-13 04:03:25.329688] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:32.407 [2024-10-13 04:03:25.329943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69321 ] 00:12:32.407 [2024-10-13 04:03:25.477386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.407 [2024-10-13 04:03:25.557394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.665 Running I/O for 5 seconds... 00:12:34.979 230336.00 IOPS, 899.75 MiB/s [2024-10-13T04:03:29.110Z] 230432.00 IOPS, 900.12 MiB/s [2024-10-13T04:03:30.061Z] 229653.33 IOPS, 897.08 MiB/s [2024-10-13T04:03:30.999Z] 229952.00 IOPS, 898.25 MiB/s 00:12:37.839 Latency(us) 00:12:37.839 [2024-10-13T04:03:30.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.839 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:37.839 null0 : 5.00 229794.54 897.63 0.00 0.00 276.16 148.87 1556.48 00:12:37.839 [2024-10-13T04:03:30.999Z] =================================================================================================================== 00:12:37.839 [2024-10-13T04:03:30.999Z] Total : 229794.54 897.63 0.00 0.00 276.16 148.87 1556.48 00:12:38.408 04:03:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:38.408 04:03:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:38.408 ************************************ 00:12:38.408 END TEST xnvme_bdevperf 00:12:38.408 ************************************ 00:12:38.408 00:12:38.408 real 0m12.146s 00:12:38.408 user 0m9.775s 00:12:38.408 sys 0m2.126s 00:12:38.408 04:03:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:38.408 04:03:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:38.408 ************************************ 00:12:38.408 END TEST nvme_xnvme 00:12:38.408 ************************************ 00:12:38.408 00:12:38.408 real 0m38.463s 00:12:38.408 user 0m32.739s 00:12:38.409 sys 0m4.929s 00:12:38.409 04:03:31 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:38.409 04:03:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.409 04:03:31 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:38.409 04:03:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:38.409 04:03:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:38.409 04:03:31 -- common/autotest_common.sh@10 -- # set +x 00:12:38.409 ************************************ 00:12:38.409 START TEST blockdev_xnvme 00:12:38.409 ************************************ 00:12:38.409 04:03:31 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:38.409 * Looking for test storage... 
00:12:38.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:38.409 04:03:31 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:38.409 04:03:31 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:38.409 04:03:31 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:38.409 04:03:31 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.409 04:03:31 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:38.409 04:03:31 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.409 04:03:31 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:38.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.409 --rc genhtml_branch_coverage=1 00:12:38.409 --rc genhtml_function_coverage=1 00:12:38.409 --rc genhtml_legend=1 00:12:38.409 --rc geninfo_all_blocks=1 00:12:38.409 --rc geninfo_unexecuted_blocks=1 00:12:38.409 00:12:38.409 ' 00:12:38.409 04:03:31 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:38.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.409 --rc genhtml_branch_coverage=1 00:12:38.409 --rc genhtml_function_coverage=1 00:12:38.409 --rc genhtml_legend=1 
00:12:38.409 --rc geninfo_all_blocks=1 00:12:38.409 --rc geninfo_unexecuted_blocks=1 00:12:38.409 00:12:38.409 ' 00:12:38.409 04:03:31 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:38.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.409 --rc genhtml_branch_coverage=1 00:12:38.409 --rc genhtml_function_coverage=1 00:12:38.409 --rc genhtml_legend=1 00:12:38.409 --rc geninfo_all_blocks=1 00:12:38.409 --rc geninfo_unexecuted_blocks=1 00:12:38.409 00:12:38.409 ' 00:12:38.409 04:03:31 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:38.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.409 --rc genhtml_branch_coverage=1 00:12:38.409 --rc genhtml_function_coverage=1 00:12:38.409 --rc genhtml_legend=1 00:12:38.409 --rc geninfo_all_blocks=1 00:12:38.409 --rc geninfo_unexecuted_blocks=1 00:12:38.409 00:12:38.409 ' 00:12:38.409 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:38.409 04:03:31 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:38.409 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:38.409 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:38.409 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:38.409 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:38.409 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:38.409 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:38.409 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:38.409 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:38.409 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:38.409 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69463 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69463 00:12:38.668 04:03:31 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 69463 ']' 00:12:38.668 04:03:31 blockdev_xnvme -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:38.668 04:03:31 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.668 04:03:31 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:38.668 04:03:31 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.668 04:03:31 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:38.668 04:03:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.668 [2024-10-13 04:03:31.649895] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:38.668 [2024-10-13 04:03:31.650016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69463 ] 00:12:38.668 [2024-10-13 04:03:31.796558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.927 [2024-10-13 04:03:31.877481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.496 04:03:32 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.496 04:03:32 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:39.496 04:03:32 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:39.496 04:03:32 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:39.496 04:03:32 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:39.496 04:03:32 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:39.496 04:03:32 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:39.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:40.014 Waiting for block devices as requested 00:12:40.014 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:40.014 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:40.014 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:40.014 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:45.293 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- 
# is_block_zoned nvme1n1 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:45.293 04:03:38 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.293 04:03:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.293 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:45.293 nvme0n1 00:12:45.293 nvme1n1 00:12:45.293 nvme2n1 00:12:45.293 nvme2n2 00:12:45.293 nvme2n3 00:12:45.294 nvme3n1 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.294 04:03:38 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "731ec076-9da6-4114-bce3-39a872765f24"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "731ec076-9da6-4114-bce3-39a872765f24",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9aa0ae21-7d9d-47a7-be56-eda076cf5762"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9aa0ae21-7d9d-47a7-be56-eda076cf5762",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "1659be5a-fdb2-411d-bc52-b669e9be3dd1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1659be5a-fdb2-411d-bc52-b669e9be3dd1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e4a02c47-d53d-4b76-a04d-cedb49b88e1c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e4a02c47-d53d-4b76-a04d-cedb49b88e1c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "d07d8ce3-2e29-4fa3-8c67-65e2e743fd43"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d07d8ce3-2e29-4fa3-8c67-65e2e743fd43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "93db02c5-d487-4559-8a39-14fc8b6967d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "93db02c5-d487-4559-8a39-14fc8b6967d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:45.294 04:03:38 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69463 
00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 69463 ']' 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69463 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69463 00:12:45.294 killing process with pid 69463 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69463' 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69463 00:12:45.294 04:03:38 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69463 00:12:46.672 04:03:39 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:46.672 04:03:39 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:46.672 04:03:39 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:46.672 04:03:39 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.672 04:03:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:46.672 ************************************ 00:12:46.672 START TEST bdev_hello_world 00:12:46.672 ************************************ 00:12:46.672 04:03:39 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:46.672 [2024-10-13 04:03:39.675518] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:12:46.672 [2024-10-13 04:03:39.675669] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69822 ] 00:12:46.672 [2024-10-13 04:03:39.825716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.930 [2024-10-13 04:03:39.900970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.218 [2024-10-13 04:03:40.188707] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:47.218 [2024-10-13 04:03:40.188749] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:47.218 [2024-10-13 04:03:40.188761] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:47.218 [2024-10-13 04:03:40.190221] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:47.218 [2024-10-13 04:03:40.190547] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:47.218 [2024-10-13 04:03:40.190564] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:47.218 [2024-10-13 04:03:40.190717] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:12:47.218 00:12:47.218 [2024-10-13 04:03:40.190731] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:47.801 ************************************ 00:12:47.801 END TEST bdev_hello_world 00:12:47.801 ************************************ 00:12:47.801 00:12:47.801 real 0m1.121s 00:12:47.801 user 0m0.859s 00:12:47.801 sys 0m0.151s 00:12:47.801 04:03:40 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.801 04:03:40 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:47.801 04:03:40 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:47.801 04:03:40 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:47.801 04:03:40 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.801 04:03:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.801 ************************************ 00:12:47.801 START TEST bdev_bounds 00:12:47.801 ************************************ 00:12:47.801 Process bdevio pid: 69853 00:12:47.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.801 04:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:47.801 04:03:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69853 00:12:47.801 04:03:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:47.801 04:03:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69853' 00:12:47.801 04:03:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69853 00:12:47.801 04:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 69853 ']' 00:12:47.801 04:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.801 04:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:47.801 04:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.801 04:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:47.801 04:03:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:47.801 04:03:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:47.801 [2024-10-13 04:03:40.853340] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:12:47.801 [2024-10-13 04:03:40.853462] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69853 ] 00:12:48.059 [2024-10-13 04:03:41.002802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:48.059 [2024-10-13 04:03:41.084990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.059 [2024-10-13 04:03:41.085206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.059 [2024-10-13 04:03:41.085284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.625 04:03:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:48.625 04:03:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:12:48.625 04:03:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:48.625 I/O targets: 00:12:48.625 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:48.625 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:48.625 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:48.625 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:48.625 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:48.625 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:48.625 00:12:48.625 00:12:48.625 CUnit - A unit testing framework for C - Version 2.1-3 00:12:48.625 http://cunit.sourceforge.net/ 00:12:48.625 00:12:48.625 00:12:48.625 Suite: bdevio tests on: nvme3n1 00:12:48.625 Test: blockdev write read block ...passed 00:12:48.625 Test: blockdev write zeroes read block ...passed 00:12:48.625 Test: blockdev write zeroes read no split ...passed 00:12:48.884 Test: blockdev write zeroes read split ...passed 00:12:48.884 Test: blockdev write zeroes read split partial ...passed 00:12:48.884 Test: blockdev reset ...passed 00:12:48.884 Test: blockdev write read 8 blocks ...passed 00:12:48.884 Test: blockdev write read size > 128k ...passed 00:12:48.884 Test: blockdev write read invalid size ...passed 00:12:48.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.884 Test: blockdev write read max offset ...passed 00:12:48.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.884 Test: blockdev writev readv 8 blocks ...passed 00:12:48.884 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.884 Test: blockdev writev readv block ...passed 00:12:48.884 Test: blockdev writev readv size > 128k ...passed 00:12:48.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.884 Test: blockdev comparev and writev ...passed 00:12:48.884 Test: blockdev nvme passthru rw ...passed 00:12:48.884 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.884 Test: blockdev nvme admin passthru ...passed 00:12:48.884 Test: blockdev copy ...passed 00:12:48.884 Suite: bdevio tests on: nvme2n3 00:12:48.884 Test: blockdev write read block ...passed 00:12:48.884 Test: blockdev write zeroes read block ...passed 00:12:48.884 Test: blockdev write zeroes read no split ...passed 00:12:48.884 Test: blockdev write zeroes read split ...passed 00:12:48.884 Test: blockdev write zeroes read split partial ...passed 00:12:48.884 Test: blockdev reset ...passed 
00:12:48.884 Test: blockdev write read 8 blocks ...passed 00:12:48.884 Test: blockdev write read size > 128k ...passed 00:12:48.884 Test: blockdev write read invalid size ...passed 00:12:48.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.884 Test: blockdev write read max offset ...passed 00:12:48.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.884 Test: blockdev writev readv 8 blocks ...passed 00:12:48.884 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.884 Test: blockdev writev readv block ...passed 00:12:48.884 Test: blockdev writev readv size > 128k ...passed 00:12:48.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.884 Test: blockdev comparev and writev ...passed 00:12:48.884 Test: blockdev nvme passthru rw ...passed 00:12:48.884 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.884 Test: blockdev nvme admin passthru ...passed 00:12:48.884 Test: blockdev copy ...passed 00:12:48.884 Suite: bdevio tests on: nvme2n2 00:12:48.884 Test: blockdev write read block ...passed 00:12:48.884 Test: blockdev write zeroes read block ...passed 00:12:48.884 Test: blockdev write zeroes read no split ...passed 00:12:48.884 Test: blockdev write zeroes read split ...passed 00:12:48.884 Test: blockdev write zeroes read split partial ...passed 00:12:48.884 Test: blockdev reset ...passed 00:12:48.884 Test: blockdev write read 8 blocks ...passed 00:12:48.884 Test: blockdev write read size > 128k ...passed 00:12:48.884 Test: blockdev write read invalid size ...passed 00:12:48.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.884 Test: blockdev write read max offset ...passed 00:12:48.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.884 Test: blockdev writev readv 8 blocks ...passed 00:12:48.884 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.884 Test: blockdev writev readv block ...passed 00:12:48.884 Test: blockdev writev readv size > 128k ...passed 00:12:48.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.884 Test: blockdev comparev and writev ...passed 00:12:48.884 Test: blockdev nvme passthru rw ...passed 00:12:48.884 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.884 Test: blockdev nvme admin passthru ...passed 00:12:48.884 Test: blockdev copy ...passed 00:12:48.884 Suite: bdevio tests on: nvme2n1 00:12:48.884 Test: blockdev write read block ...passed 00:12:48.884 Test: blockdev write zeroes read block ...passed 00:12:48.884 Test: blockdev write zeroes read no split ...passed 00:12:48.884 Test: blockdev write zeroes read split ...passed 00:12:48.884 Test: blockdev write zeroes read split partial ...passed 00:12:48.884 Test: blockdev reset ...passed 00:12:48.884 Test: blockdev write read 8 blocks ...passed 00:12:48.884 Test: blockdev write read size > 128k ...passed 00:12:48.884 Test: blockdev write read invalid size ...passed 00:12:48.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:48.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:48.884 Test: blockdev write read max offset ...passed 00:12:48.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:48.884 Test: blockdev writev readv 8 blocks 
...passed 00:12:48.884 Test: blockdev writev readv 30 x 1block ...passed 00:12:48.884 Test: blockdev writev readv block ...passed 00:12:48.884 Test: blockdev writev readv size > 128k ...passed 00:12:48.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:48.884 Test: blockdev comparev and writev ...passed 00:12:48.884 Test: blockdev nvme passthru rw ...passed 00:12:48.884 Test: blockdev nvme passthru vendor specific ...passed 00:12:48.884 Test: blockdev nvme admin passthru ...passed 00:12:48.884 Test: blockdev copy ...passed 00:12:48.884 Suite: bdevio tests on: nvme1n1 00:12:48.884 Test: blockdev write read block ...passed 00:12:48.884 Test: blockdev write zeroes read block ...passed 00:12:48.884 Test: blockdev write zeroes read no split ...passed 00:12:49.143 Test: blockdev write zeroes read split ...passed 00:12:49.143 Test: blockdev write zeroes read split partial ...passed 00:12:49.143 Test: blockdev reset ...passed 00:12:49.143 Test: blockdev write read 8 blocks ...passed 00:12:49.143 Test: blockdev write read size > 128k ...passed 00:12:49.143 Test: blockdev write read invalid size ...passed 00:12:49.143 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:49.143 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:49.143 Test: blockdev write read max offset ...passed 00:12:49.143 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:49.143 Test: blockdev writev readv 8 blocks ...passed 00:12:49.143 Test: blockdev writev readv 30 x 1block ...passed 00:12:49.143 Test: blockdev writev readv block ...passed 00:12:49.143 Test: blockdev writev readv size > 128k ...passed 00:12:49.143 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:49.143 Test: blockdev comparev and writev ...passed 00:12:49.143 Test: blockdev nvme passthru rw ...passed 00:12:49.143 Test: blockdev nvme passthru vendor specific ...passed 00:12:49.143 Test: blockdev nvme admin passthru ...passed 00:12:49.143 Test: blockdev copy ...passed 00:12:49.143 Suite: bdevio tests on: nvme0n1 00:12:49.143 Test: blockdev write read block ...passed 00:12:49.143 Test: blockdev write zeroes read block ...passed 00:12:49.143 Test: blockdev write zeroes read no split ...passed 00:12:49.143 Test: blockdev write zeroes read split ...passed 00:12:49.143 Test: blockdev write zeroes read split partial ...passed 00:12:49.143 Test: blockdev reset ...passed 00:12:49.143 Test: blockdev write read 8 blocks ...passed 00:12:49.143 Test: blockdev write read size > 128k ...passed 00:12:49.143 Test: blockdev write read invalid size ...passed 00:12:49.143 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:49.143 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:49.143 Test: blockdev write read max offset ...passed 00:12:49.143 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:49.143 Test: blockdev writev readv 8 blocks ...passed 00:12:49.143 Test: blockdev writev readv 30 x 1block ...passed 00:12:49.143 Test: blockdev writev readv block ...passed 00:12:49.143 Test: blockdev writev readv size > 128k ...passed 00:12:49.143 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:49.144 Test: blockdev comparev and writev ...passed 00:12:49.144 Test: blockdev nvme passthru rw ...passed 00:12:49.144 Test: blockdev nvme passthru vendor specific ...passed 00:12:49.144 Test: blockdev nvme admin passthru ...passed 00:12:49.144 Test: blockdev copy ...passed 
00:12:49.144 00:12:49.144 Run Summary: Type Total Ran Passed Failed Inactive 00:12:49.144 suites 6 6 n/a 0 0 00:12:49.144 tests 138 138 138 0 0 00:12:49.144 asserts 780 780 780 0 n/a 00:12:49.144 00:12:49.144 Elapsed time = 1.009 seconds 00:12:49.144 0 00:12:49.144 04:03:42 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69853 00:12:49.144 04:03:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 69853 ']' 00:12:49.144 04:03:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 69853 00:12:49.144 04:03:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:12:49.144 04:03:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.144 04:03:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69853 00:12:49.144 04:03:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:49.144 04:03:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:49.144 04:03:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69853' 00:12:49.144 killing process with pid 69853 00:12:49.144 04:03:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 69853 00:12:49.144 04:03:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 69853 00:12:49.711 ************************************ 00:12:49.711 END TEST bdev_bounds 00:12:49.711 ************************************ 00:12:49.711 04:03:42 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:49.711 00:12:49.711 real 0m1.952s 00:12:49.711 user 0m4.975s 00:12:49.711 sys 0m0.261s 00:12:49.711 04:03:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.711 04:03:42 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:49.711 04:03:42 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:49.711 04:03:42 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:49.711 04:03:42 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.711 04:03:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.711 ************************************ 00:12:49.711 START TEST bdev_nbd 00:12:49.711 ************************************ 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:49.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69909 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69909 /var/tmp/spdk-nbd.sock 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 69909 ']' 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:49.711 04:03:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:49.969 [2024-10-13 04:03:42.877124] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
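What follows in the trace is the NBD start/stop verification for the six bdevs (nvme0n1, nvme1n1, nvme2n1, nvme2n2, nvme2n3, nvme3n1). Each iteration asks the per-test RPC socket to export one bdev over an NBD device, waits for that device to appear in /proc/partitions, and proves it is readable by copying a single 4 KiB block out of it with O_DIRECT. One iteration, condensed (the rpc.py path and /var/tmp/spdk-nbd.sock socket are the ones used in this run; the polling loop and scratch-file path are simplified stand-ins for the waitfornbd helper traced below):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
bdev=nvme0n1                  # first of the six bdevs in this run
tmp=/tmp/nbdtest              # scratch file; placeholder path

nbd=$($rpc nbd_start_disk "$bdev")     # prints the /dev/nbdX it allocated, e.g. /dev/nbd0

# simplified waitfornbd: poll until the kernel lists the new device
for _ in $(seq 1 20); do
    grep -q -w "$(basename "$nbd")" /proc/partitions && break
    sleep 0.1
done

# read exactly one 4 KiB block and require a non-empty result, as the trace does
dd if="$nbd" of="$tmp" bs=4096 count=1 iflag=direct
[ "$(stat -c %s "$tmp")" -ne 0 ] || exit 1
rm -f "$tmp"

$rpc nbd_stop_disk "$nbd"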
00:12:49.969 [2024-10-13 04:03:42.877238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.969 [2024-10-13 04:03:43.024279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.969 [2024-10-13 04:03:43.105178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:50.536 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.795 
1+0 records in 00:12:50.795 1+0 records out 00:12:50.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589877 s, 6.9 MB/s 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:50.795 04:03:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:51.053 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.054 1+0 records in 00:12:51.054 1+0 records out 00:12:51.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464854 s, 8.8 MB/s 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:51.054 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:51.315 04:03:44 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.315 1+0 records in 00:12:51.315 1+0 records out 00:12:51.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295468 s, 13.9 MB/s 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:51.315 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.574 1+0 records in 00:12:51.574 1+0 records out 00:12:51.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004007 s, 10.2 MB/s 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:51.574 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.833 1+0 records in 00:12:51.833 1+0 records out 00:12:51.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328112 s, 12.5 MB/s 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:51.833 04:03:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:52.091 04:03:45 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.091 1+0 records in 00:12:52.091 1+0 records out 00:12:52.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401515 s, 10.2 MB/s 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:52.091 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:52.352 { 00:12:52.352 "nbd_device": "/dev/nbd0", 00:12:52.352 "bdev_name": "nvme0n1" 00:12:52.352 }, 00:12:52.352 { 00:12:52.352 "nbd_device": "/dev/nbd1", 00:12:52.352 "bdev_name": "nvme1n1" 00:12:52.352 }, 00:12:52.352 { 00:12:52.352 "nbd_device": "/dev/nbd2", 00:12:52.352 "bdev_name": "nvme2n1" 00:12:52.352 }, 00:12:52.352 { 00:12:52.352 "nbd_device": "/dev/nbd3", 00:12:52.352 "bdev_name": "nvme2n2" 00:12:52.352 }, 00:12:52.352 { 00:12:52.352 "nbd_device": "/dev/nbd4", 00:12:52.352 "bdev_name": "nvme2n3" 00:12:52.352 }, 00:12:52.352 { 00:12:52.352 "nbd_device": "/dev/nbd5", 00:12:52.352 "bdev_name": "nvme3n1" 00:12:52.352 } 00:12:52.352 ]' 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:52.352 { 00:12:52.352 "nbd_device": "/dev/nbd0", 00:12:52.352 "bdev_name": "nvme0n1" 00:12:52.352 }, 00:12:52.352 { 00:12:52.352 "nbd_device": "/dev/nbd1", 00:12:52.352 "bdev_name": "nvme1n1" 00:12:52.352 }, 00:12:52.352 { 00:12:52.352 "nbd_device": "/dev/nbd2", 00:12:52.352 "bdev_name": "nvme2n1" 00:12:52.352 }, 00:12:52.352 { 00:12:52.352 "nbd_device": "/dev/nbd3", 00:12:52.352 "bdev_name": "nvme2n2" 00:12:52.352 }, 00:12:52.352 { 00:12:52.352 "nbd_device": "/dev/nbd4", 00:12:52.352 "bdev_name": "nvme2n3" 00:12:52.352 }, 00:12:52.352 { 00:12:52.352 "nbd_device": "/dev/nbd5", 00:12:52.352 "bdev_name": "nvme3n1" 00:12:52.352 } 00:12:52.352 ]' 00:12:52.352 04:03:45 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.352 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:52.612 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:52.612 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:52.612 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:52.612 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.612 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.612 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:52.612 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.612 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.612 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.612 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:52.870 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:52.870 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:52.870 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:52.870 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.870 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.870 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:52.870 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:52.870 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.870 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.870 04:03:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:53.129 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:53.129 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:53.129 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:53.129 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.129 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.129 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:53.129 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.129 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.129 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.129 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:53.390 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:53.390 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:53.390 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:53.390 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.390 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.390 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:53.390 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.390 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.390 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.390 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:53.390 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:53.650 04:03:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:53.908 /dev/nbd0 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.908 1+0 records in 00:12:53.908 1+0 records out 00:12:53.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034588 s, 11.8 MB/s 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:53.908 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:54.166 /dev/nbd1 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.166 1+0 records in 00:12:54.166 1+0 records out 00:12:54.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343635 s, 11.9 MB/s 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.166 04:03:47 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.166 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:54.425 /dev/nbd10 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.425 1+0 records in 00:12:54.425 1+0 records out 00:12:54.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471819 s, 8.7 MB/s 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.425 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.426 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:54.686 /dev/nbd11 00:12:54.686 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:54.686 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:54.686 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:54.686 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.686 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.686 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.686 04:03:47 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:54.687 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.687 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.687 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.687 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.687 1+0 records in 00:12:54.687 1+0 records out 00:12:54.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411288 s, 10.0 MB/s 00:12:54.687 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.687 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.687 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.687 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.687 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.687 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.687 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.687 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:54.947 /dev/nbd12 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:54.947 1+0 records in 00:12:54.947 1+0 records out 00:12:54.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412911 s, 9.9 MB/s 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:54.947 04:03:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:55.216 /dev/nbd13 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:55.216 1+0 records in 00:12:55.216 1+0 records out 00:12:55.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545767 s, 7.5 MB/s 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.216 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:55.476 { 00:12:55.476 "nbd_device": "/dev/nbd0", 00:12:55.476 "bdev_name": "nvme0n1" 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "nbd_device": "/dev/nbd1", 00:12:55.476 "bdev_name": "nvme1n1" 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "nbd_device": "/dev/nbd10", 00:12:55.476 "bdev_name": "nvme2n1" 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "nbd_device": "/dev/nbd11", 00:12:55.476 "bdev_name": "nvme2n2" 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "nbd_device": "/dev/nbd12", 00:12:55.476 "bdev_name": "nvme2n3" 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "nbd_device": "/dev/nbd13", 00:12:55.476 "bdev_name": "nvme3n1" 00:12:55.476 } 00:12:55.476 ]' 00:12:55.476 04:03:48 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:55.476 { 00:12:55.476 "nbd_device": "/dev/nbd0", 00:12:55.476 "bdev_name": "nvme0n1" 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "nbd_device": "/dev/nbd1", 00:12:55.476 "bdev_name": "nvme1n1" 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "nbd_device": "/dev/nbd10", 00:12:55.476 "bdev_name": "nvme2n1" 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "nbd_device": "/dev/nbd11", 00:12:55.476 "bdev_name": "nvme2n2" 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "nbd_device": "/dev/nbd12", 00:12:55.476 "bdev_name": "nvme2n3" 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "nbd_device": "/dev/nbd13", 00:12:55.476 "bdev_name": "nvme3n1" 00:12:55.476 } 00:12:55.476 ]' 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:55.476 /dev/nbd1 00:12:55.476 /dev/nbd10 00:12:55.476 /dev/nbd11 00:12:55.476 /dev/nbd12 00:12:55.476 /dev/nbd13' 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:55.476 /dev/nbd1 00:12:55.476 /dev/nbd10 00:12:55.476 /dev/nbd11 00:12:55.476 /dev/nbd12 00:12:55.476 /dev/nbd13' 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:55.476 256+0 records in 00:12:55.476 256+0 records out 00:12:55.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0080599 s, 130 MB/s 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:55.476 256+0 records in 00:12:55.476 256+0 records out 00:12:55.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0529694 s, 19.8 MB/s 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:55.476 256+0 records in 00:12:55.476 256+0 records out 00:12:55.476 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0613954 s, 17.1 MB/s 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:55.476 256+0 records in 00:12:55.476 256+0 records out 00:12:55.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0502009 s, 20.9 MB/s 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.476 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:55.738 256+0 records in 00:12:55.738 256+0 records out 00:12:55.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0662608 s, 15.8 MB/s 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:55.738 256+0 records in 00:12:55.738 256+0 records out 00:12:55.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0511608 s, 20.5 MB/s 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:55.738 256+0 records in 00:12:55.738 256+0 records out 00:12:55.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.048418 s, 21.7 MB/s 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.738 04:03:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:55.999 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:55.999 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:55.999 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:55.999 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.999 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.999 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:55.999 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.999 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.999 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.999 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:56.259 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:56.259 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:56.259 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:56.259 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.259 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.259 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:56.259 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.259 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.259 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.259 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:56.518 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:56.518 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:56.518 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:56.518 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.518 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.518 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:56.518 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.518 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.518 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.518 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:56.518 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.779 04:03:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:57.065 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:57.065 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:57.065 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:57.065 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.065 04:03:50 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.065 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:57.065 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.065 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.065 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:57.065 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.065 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:57.324 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:57.584 malloc_lvol_verify 00:12:57.584 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:57.844 191c168d-06d5-4b67-8562-9ab75b4bebf5 00:12:57.844 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:57.844 9c13b51e-ef42-4a1d-b755-6e8dacf7e83d 00:12:57.844 04:03:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:58.103 /dev/nbd0 00:12:58.103 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:58.103 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:58.103 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:58.103 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:58.103 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:12:58.103 mke2fs 1.47.0 (5-Feb-2023) 00:12:58.103 Discarding device blocks: 0/4096 done 00:12:58.103 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:58.103 00:12:58.103 Allocating group tables: 0/1 done 00:12:58.103 Writing inode tables: 0/1 done 00:12:58.103 Creating journal (1024 blocks): done 00:12:58.103 Writing superblocks and filesystem accounting information: 0/1 done 00:12:58.103 00:12:58.103 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:58.103 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.103 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:58.103 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.103 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:58.103 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.103 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69909 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 69909 ']' 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 69909 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69909 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:58.363 killing process with pid 69909 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69909' 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 69909 00:12:58.363 04:03:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 69909 00:12:58.933 04:03:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:58.933 00:12:58.933 real 0m9.262s 00:12:58.933 user 0m13.312s 00:12:58.933 sys 0m3.080s 00:12:58.933 04:03:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:58.933 ************************************ 00:12:58.933 END TEST bdev_nbd 00:12:58.933 ************************************ 00:12:58.933 
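The teardown traced above repeats the same two-step pattern for every device: ask the SPDK app to stop the nbd disk over its RPC socket, then poll /proc/partitions until the kernel entry disappears (up to 20 attempts), and finally confirm with nbd_get_disks that nothing is left. A condensed sketch of that loop, assuming the same rpc.py path and /var/tmp/spdk-nbd.sock socket used in this run:

# Sketch of the nbd_stop_disks / waitfornbd_exit pattern from nbd_common.sh
# (paths and device list taken from the trace above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do
        # Stop waiting as soon as the kernel drops the partition entry.
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1
    done
done
# Expect zero devices to remain (empty list prints a count of 0).
"$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true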
04:03:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:59.243 04:03:52 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:59.243 04:03:52 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:12:59.243 04:03:52 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:12:59.243 04:03:52 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:59.243 04:03:52 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:59.243 04:03:52 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.243 04:03:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.243 ************************************ 00:12:59.243 START TEST bdev_fio 00:12:59.243 ************************************ 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:12:59.243 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo 
serialize_overlap=1 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:59.243 ************************************ 00:12:59.243 START TEST bdev_fio_rw_verify 00:12:59.243 ************************************ 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:59.243 04:03:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:59.244 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:59.244 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:59.244 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:59.244 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:59.244 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:59.244 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:59.244 fio-3.35 00:12:59.244 Starting 6 threads 00:13:11.577 00:13:11.577 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70299: Sun Oct 13 04:04:03 2024 00:13:11.577 read: IOPS=18.2k, BW=71.1MiB/s (74.6MB/s)(712MiB/10002msec) 00:13:11.577 slat (usec): min=2, max=2710, avg= 5.52, stdev=15.79 00:13:11.577 clat (usec): min=73, max=8779, avg=1052.65, 
stdev=804.33 00:13:11.577 lat (usec): min=80, max=8792, avg=1058.17, stdev=804.99 00:13:11.577 clat percentiles (usec): 00:13:11.577 | 50.000th=[ 816], 99.000th=[ 3621], 99.900th=[ 5211], 99.990th=[ 6521], 00:13:11.577 | 99.999th=[ 8717] 00:13:11.577 write: IOPS=18.5k, BW=72.3MiB/s (75.9MB/s)(724MiB/10002msec); 0 zone resets 00:13:11.577 slat (usec): min=12, max=5588, avg=34.82, stdev=126.19 00:13:11.577 clat (usec): min=67, max=8964, avg=1268.63, stdev=896.97 00:13:11.577 lat (usec): min=80, max=8994, avg=1303.45, stdev=912.11 00:13:11.577 clat percentiles (usec): 00:13:11.577 | 50.000th=[ 1029], 99.000th=[ 4047], 99.900th=[ 5538], 99.990th=[ 6915], 00:13:11.577 | 99.999th=[ 8586] 00:13:11.577 bw ( KiB/s): min=45039, max=170241, per=100.00%, avg=75515.68, stdev=5821.95, samples=114 00:13:11.577 iops : min=11258, max=42558, avg=18878.16, stdev=1455.45, samples=114 00:13:11.577 lat (usec) : 100=0.04%, 250=6.12%, 500=18.97%, 750=17.04%, 1000=11.11% 00:13:11.577 lat (msec) : 2=30.75%, 4=15.14%, 10=0.83% 00:13:11.577 cpu : usr=43.67%, sys=31.28%, ctx=6078, majf=0, minf=17274 00:13:11.577 IO depths : 1=11.6%, 2=24.1%, 4=50.9%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:11.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.577 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.577 issued rwts: total=182168,185233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.577 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:11.577 00:13:11.577 Run status group 0 (all jobs): 00:13:11.577 READ: bw=71.1MiB/s (74.6MB/s), 71.1MiB/s-71.1MiB/s (74.6MB/s-74.6MB/s), io=712MiB (746MB), run=10002-10002msec 00:13:11.577 WRITE: bw=72.3MiB/s (75.9MB/s), 72.3MiB/s-72.3MiB/s (75.9MB/s-75.9MB/s), io=724MiB (759MB), run=10002-10002msec 00:13:11.577 ----------------------------------------------------- 00:13:11.577 Suppressions used: 00:13:11.577 count bytes template 00:13:11.577 6 48 /usr/src/fio/parse.c 00:13:11.577 2940 282240 /usr/src/fio/iolog.c 00:13:11.577 1 8 libtcmalloc_minimal.so 00:13:11.577 1 904 libcrypto.so 00:13:11.577 ----------------------------------------------------- 00:13:11.577 00:13:11.577 00:13:11.577 real 0m12.009s 00:13:11.577 user 0m27.727s 00:13:11.577 sys 0m19.126s 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:11.577 ************************************ 00:13:11.577 END TEST bdev_fio_rw_verify 00:13:11.577 ************************************ 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:11.577 04:04:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:11.578 04:04:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "731ec076-9da6-4114-bce3-39a872765f24"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "731ec076-9da6-4114-bce3-39a872765f24",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9aa0ae21-7d9d-47a7-be56-eda076cf5762"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9aa0ae21-7d9d-47a7-be56-eda076cf5762",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "1659be5a-fdb2-411d-bc52-b669e9be3dd1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1659be5a-fdb2-411d-bc52-b669e9be3dd1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e4a02c47-d53d-4b76-a04d-cedb49b88e1c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e4a02c47-d53d-4b76-a04d-cedb49b88e1c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "d07d8ce3-2e29-4fa3-8c67-65e2e743fd43"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d07d8ce3-2e29-4fa3-8c67-65e2e743fd43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "93db02c5-d487-4559-8a39-14fc8b6967d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "93db02c5-d487-4559-8a39-14fc8b6967d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:11.578 04:04:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:11.578 04:04:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:11.578 /home/vagrant/spdk_repo/spdk 00:13:11.578 04:04:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:11.578 04:04:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:11.578 04:04:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
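Earlier in this test, fio_config_gen created /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio for a verify workload, serialize_overlap=1 was appended after the fio-3.35 version check, and one [job_*] section per bdev was echoed in, while ioengine, iodepth, block size and runtime were passed on the fio command line. A sketch of the file those steps would leave behind; the job names and filenames come from the trace, whereas the verify= line is an assumed stand-in for whatever fio_config_gen actually writes:

# Illustrative reconstruction of the generated bdev.fio. Only the job
# sections and serialize_overlap=1 appear in the trace; ioengine, iodepth,
# bs and runtime are supplied on the fio command line (see the fio_params
# local above), not in this file.
cat <<'EOF' > /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
[global]
verify=crc32c
serialize_overlap=1

[job_nvme0n1]
filename=nvme0n1

[job_nvme1n1]
filename=nvme1n1

[job_nvme2n1]
filename=nvme2n1

[job_nvme2n2]
filename=nvme2n2

[job_nvme2n3]
filename=nvme2n3

[job_nvme3n1]
filename=nvme3n1
EOF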
00:13:11.578 ************************************ 00:13:11.578 END TEST bdev_fio 00:13:11.578 ************************************ 00:13:11.578 00:13:11.578 real 0m12.178s 00:13:11.578 user 0m27.797s 00:13:11.578 sys 0m19.203s 00:13:11.578 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.578 04:04:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:11.578 04:04:04 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:11.578 04:04:04 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:11.578 04:04:04 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:11.578 04:04:04 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.578 04:04:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.578 ************************************ 00:13:11.578 START TEST bdev_verify 00:13:11.578 ************************************ 00:13:11.578 04:04:04 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:11.578 [2024-10-13 04:04:04.440306] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:13:11.578 [2024-10-13 04:04:04.440457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70472 ] 00:13:11.578 [2024-10-13 04:04:04.596943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:11.578 [2024-10-13 04:04:04.727056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.578 [2024-10-13 04:04:04.727193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.153 Running I/O for 5 seconds... 
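The bdevperf invocation above drives a verify workload against the xnvme bdevs described by bdev.json. The same command, annotated; flag meanings follow standard bdevperf usage, and -C plus the trailing empty argument are passed through exactly as the test script supplies them:

# Annotated form of the bdev_verify invocation (arguments copied from the
# trace; comments reflect standard bdevperf usage).
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
args=(
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # bdev layer config: the xnvme bdevs dumped above
    -q 128      # per-job queue depth
    -o 4096     # I/O size in bytes (4 KiB)
    -w verify   # write, then read back and compare
    -t 5        # run time in seconds
    -m 0x3      # core mask: two reactors, matching "Total cores available: 2"
    -C          # kept as used by the test script
    ''          # trailing empty argument, as in the trace
)
"$bdevperf" "${args[@]}"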
00:13:14.487 21056.00 IOPS, 82.25 MiB/s [2024-10-13T04:04:08.615Z] 21824.00 IOPS, 85.25 MiB/s [2024-10-13T04:04:09.559Z] 21813.33 IOPS, 85.21 MiB/s [2024-10-13T04:04:10.504Z] 21336.00 IOPS, 83.34 MiB/s [2024-10-13T04:04:10.504Z] 21433.20 IOPS, 83.72 MiB/s 00:13:17.344 Latency(us) 00:13:17.344 [2024-10-13T04:04:10.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.344 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:17.344 Verification LBA range: start 0x0 length 0xa0000 00:13:17.344 nvme0n1 : 5.04 1675.69 6.55 0.00 0.00 76245.30 8015.56 100824.62 00:13:17.344 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:17.344 Verification LBA range: start 0xa0000 length 0xa0000 00:13:17.344 nvme0n1 : 5.06 1592.79 6.22 0.00 0.00 80218.90 14115.45 166158.97 00:13:17.344 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:17.344 Verification LBA range: start 0x0 length 0xbd0bd 00:13:17.344 nvme1n1 : 5.06 2242.86 8.76 0.00 0.00 56852.98 8368.44 61704.66 00:13:17.344 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:17.344 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:17.344 nvme1n1 : 5.04 2121.63 8.29 0.00 0.00 60081.46 5091.64 91952.05 00:13:17.344 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:17.344 Verification LBA range: start 0x0 length 0x80000 00:13:17.344 nvme2n1 : 5.06 1721.52 6.72 0.00 0.00 73874.61 11090.71 77433.30 00:13:17.344 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:17.344 Verification LBA range: start 0x80000 length 0x80000 00:13:17.344 nvme2n1 : 5.05 1673.82 6.54 0.00 0.00 75738.53 13308.85 82676.18 00:13:17.344 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:17.344 Verification LBA range: start 0x0 length 0x80000 00:13:17.344 nvme2n2 : 5.07 1718.19 6.71 0.00 0.00 73924.93 8217.21 71787.13 00:13:17.344 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:17.344 Verification LBA range: start 0x80000 length 0x80000 00:13:17.344 nvme2n2 : 5.07 1691.94 6.61 0.00 0.00 74714.81 3037.34 69770.63 00:13:17.344 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:17.344 Verification LBA range: start 0x0 length 0x80000 00:13:17.344 nvme2n3 : 5.05 1698.22 6.63 0.00 0.00 74641.61 6125.10 76626.71 00:13:17.344 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:17.344 Verification LBA range: start 0x80000 length 0x80000 00:13:17.344 nvme2n3 : 5.07 1666.13 6.51 0.00 0.00 75656.33 3856.54 75416.81 00:13:17.344 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:17.344 Verification LBA range: start 0x0 length 0x20000 00:13:17.344 nvme3n1 : 5.06 1694.20 6.62 0.00 0.00 74702.38 6099.89 79046.50 00:13:17.344 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:17.344 Verification LBA range: start 0x20000 length 0x20000 00:13:17.344 nvme3n1 : 5.08 1687.52 6.59 0.00 0.00 74452.67 3428.04 92355.35 00:13:17.344 [2024-10-13T04:04:10.504Z] =================================================================================================================== 00:13:17.344 [2024-10-13T04:04:10.504Z] Total : 21184.52 82.75 0.00 0.00 71891.81 3037.34 166158.97 00:13:18.291 00:13:18.291 real 0m6.725s 00:13:18.291 user 0m10.940s 00:13:18.291 sys 0m1.418s 00:13:18.291 04:04:11 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:18.291 ************************************ 00:13:18.291 END TEST bdev_verify 00:13:18.291 ************************************ 00:13:18.291 04:04:11 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:18.291 04:04:11 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:18.291 04:04:11 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:18.291 04:04:11 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:18.291 04:04:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.291 ************************************ 00:13:18.291 START TEST bdev_verify_big_io 00:13:18.291 ************************************ 00:13:18.291 04:04:11 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:18.291 [2024-10-13 04:04:11.228138] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:13:18.291 [2024-10-13 04:04:11.228292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70576 ] 00:13:18.291 [2024-10-13 04:04:11.382274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:18.552 [2024-10-13 04:04:11.515972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.552 [2024-10-13 04:04:11.516108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.124 Running I/O for 5 seconds... 
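The bandwidth columns in these runs are simply IOPS scaled by the I/O size: MiB/s = IOPS x I/O size / 2^20. The big-I/O pass below uses -o 65536, so the same IOPS is worth sixteen times the bandwidth of the 4 KiB verify pass above. A quick cross-check using only figures printed in this log:

# Reproduce the reported bandwidth from IOPS and I/O size (figures are the
# first progress samples of the two runs in this log).
iops_to_mib() { awk -v iops="$1" -v bs="$2" 'BEGIN { printf "%.2f MiB/s\n", iops * bs / 1048576 }'; }
iops_to_mib 21056 4096    # bdev_verify, 4 KiB I/O        -> 82.25 MiB/s
iops_to_mib 1696 65536    # bdev_verify_big_io, 64 KiB I/O -> 106.00 MiB/s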
00:13:25.017 1696.00 IOPS, 106.00 MiB/s [2024-10-13T04:04:18.177Z] 2445.50 IOPS, 152.84 MiB/s [2024-10-13T04:04:18.438Z] 2593.33 IOPS, 162.08 MiB/s 00:13:25.278 Latency(us) 00:13:25.278 [2024-10-13T04:04:18.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.278 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:25.278 Verification LBA range: start 0x0 length 0xa000 00:13:25.278 nvme0n1 : 6.10 60.31 3.77 0.00 0.00 1964357.21 189550.28 2490771.30 00:13:25.278 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:25.278 Verification LBA range: start 0xa000 length 0xa000 00:13:25.278 nvme0n1 : 5.88 67.99 4.25 0.00 0.00 1791504.19 287148.50 3613554.22 00:13:25.278 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:25.278 Verification LBA range: start 0x0 length 0xbd0b 00:13:25.278 nvme1n1 : 6.12 125.45 7.84 0.00 0.00 952429.23 23895.43 1155046.79 00:13:25.278 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:25.278 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:25.278 nvme1n1 : 5.89 119.51 7.47 0.00 0.00 989567.82 64124.46 1910021.51 00:13:25.278 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:25.278 Verification LBA range: start 0x0 length 0x8000 00:13:25.278 nvme2n1 : 6.13 104.47 6.53 0.00 0.00 1095703.47 39321.60 1245385.65 00:13:25.278 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:25.278 Verification LBA range: start 0x8000 length 0x8000 00:13:25.278 nvme2n1 : 6.11 136.26 8.52 0.00 0.00 853500.24 121796.14 890483.00 00:13:25.278 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:25.278 Verification LBA range: start 0x0 length 0x8000 00:13:25.278 nvme2n2 : 6.15 85.85 5.37 0.00 0.00 1321947.62 24097.08 1380893.93 00:13:25.278 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:25.278 Verification LBA range: start 0x8000 length 0x8000 00:13:25.278 nvme2n2 : 6.11 140.75 8.80 0.00 0.00 803748.51 122602.73 845313.58 00:13:25.278 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:25.278 Verification LBA range: start 0x0 length 0x8000 00:13:25.278 nvme2n3 : 6.14 125.14 7.82 0.00 0.00 874170.03 22483.89 1367988.38 00:13:25.278 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:25.278 Verification LBA range: start 0x8000 length 0x8000 00:13:25.278 nvme2n3 : 6.11 164.91 10.31 0.00 0.00 654370.16 80256.39 877577.45 00:13:25.278 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:25.278 Verification LBA range: start 0x0 length 0x2000 00:13:25.278 nvme3n1 : 6.14 101.61 6.35 0.00 0.00 1039490.72 19156.68 2142321.43 00:13:25.279 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:25.279 Verification LBA range: start 0x2000 length 0x2000 00:13:25.279 nvme3n1 : 6.12 111.15 6.95 0.00 0.00 956859.35 4209.43 2761787.86 00:13:25.279 [2024-10-13T04:04:18.439Z] =================================================================================================================== 00:13:25.279 [2024-10-13T04:04:18.439Z] Total : 1343.41 83.96 0.00 0.00 1014481.09 4209.43 3613554.22 00:13:26.221 00:13:26.221 real 0m8.043s 00:13:26.221 user 0m14.662s 00:13:26.221 sys 0m0.492s 00:13:26.221 04:04:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.221 
************************************ 00:13:26.221 04:04:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.221 END TEST bdev_verify_big_io 00:13:26.221 ************************************ 00:13:26.221 04:04:19 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:26.221 04:04:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:26.221 04:04:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.221 04:04:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.221 ************************************ 00:13:26.221 START TEST bdev_write_zeroes 00:13:26.221 ************************************ 00:13:26.221 04:04:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:26.221 [2024-10-13 04:04:19.341207] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:13:26.221 [2024-10-13 04:04:19.341382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70687 ] 00:13:26.481 [2024-10-13 04:04:19.500127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.748 [2024-10-13 04:04:19.650228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.022 Running I/O for 1 seconds... 
00:13:27.966 70912.00 IOPS, 277.00 MiB/s 00:13:27.966 Latency(us) 00:13:27.966 [2024-10-13T04:04:21.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.966 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.966 nvme0n1 : 1.02 11587.16 45.26 0.00 0.00 11035.64 6604.01 21374.82 00:13:27.966 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.966 nvme1n1 : 1.03 12576.95 49.13 0.00 0.00 10138.44 5873.03 20366.57 00:13:27.966 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.966 nvme2n1 : 1.03 11596.35 45.30 0.00 0.00 11007.47 6755.25 20669.05 00:13:27.966 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.966 nvme2n2 : 1.02 11523.60 45.01 0.00 0.00 11001.57 6175.51 20669.05 00:13:27.966 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.966 nvme2n3 : 1.02 11510.42 44.96 0.00 0.00 11006.60 6276.33 20971.52 00:13:27.966 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:27.966 nvme3n1 : 1.02 11497.37 44.91 0.00 0.00 11006.44 6125.10 21072.34 00:13:27.966 [2024-10-13T04:04:21.126Z] =================================================================================================================== 00:13:27.966 [2024-10-13T04:04:21.126Z] Total : 70291.85 274.58 0.00 0.00 10854.95 5873.03 21374.82 00:13:28.912 00:13:28.912 real 0m2.635s 00:13:28.912 user 0m1.971s 00:13:28.912 sys 0m0.491s 00:13:28.912 04:04:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.912 ************************************ 00:13:28.912 END TEST bdev_write_zeroes 00:13:28.912 ************************************ 00:13:28.912 04:04:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:28.912 04:04:21 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:28.912 04:04:21 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:28.912 04:04:21 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.912 04:04:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.912 ************************************ 00:13:28.912 START TEST bdev_json_nonenclosed 00:13:28.912 ************************************ 00:13:28.912 04:04:21 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:28.912 [2024-10-13 04:04:22.041551] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
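In the write_zeroes table above, the Total row is the arithmetic sum of the six per-bdev rows. A quick check with the printed IOPS values:

# Sum of the per-bdev write_zeroes IOPS from the table above; matches the
# Total row of 70291.85.
awk 'BEGIN { printf "%.2f\n", 11587.16 + 12576.95 + 11596.35 + 11523.60 + 11510.42 + 11497.37 }'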
00:13:28.912 [2024-10-13 04:04:22.041709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70730 ] 00:13:29.175 [2024-10-13 04:04:22.196883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.175 [2024-10-13 04:04:22.328941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.175 [2024-10-13 04:04:22.329047] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:29.175 [2024-10-13 04:04:22.329066] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:29.175 [2024-10-13 04:04:22.329077] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:29.436 00:13:29.436 real 0m0.555s 00:13:29.436 user 0m0.341s 00:13:29.436 sys 0m0.107s 00:13:29.436 ************************************ 00:13:29.436 04:04:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.436 04:04:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:29.436 END TEST bdev_json_nonenclosed 00:13:29.436 ************************************ 00:13:29.436 04:04:22 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.436 04:04:22 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:29.436 04:04:22 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.436 04:04:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.436 ************************************ 00:13:29.436 START TEST bdev_json_nonarray 00:13:29.436 ************************************ 00:13:29.436 04:04:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:29.698 [2024-10-13 04:04:22.658250] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:13:29.698 [2024-10-13 04:04:22.658402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70761 ] 00:13:29.698 [2024-10-13 04:04:22.812833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.959 [2024-10-13 04:04:22.942726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.959 [2024-10-13 04:04:22.942846] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
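The two negative JSON tests above each break one rule of the config loader and are rejected with the errors logged: a configuration must be a single JSON object, and its "subsystems" key must hold an array, which is exactly the shape save_config emits later in this log. A minimal illustration; the contents of the two fixture files are assumptions, only the error messages come from the log:

# Shape the loader accepts (mirrors the save_config output later in this log).
cat <<'EOF' > /tmp/minimal_config.json
{
  "subsystems": []
}
EOF
# What the two fixtures presumably look like (illustrative only):
#   nonenclosed.json : top-level content not wrapped in { }   -> "not enclosed in {}."
#   nonarray.json    : "subsystems" set to a non-array value  -> "'subsystems' should be an array."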
00:13:29.959 [2024-10-13 04:04:22.942865] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:29.959 [2024-10-13 04:04:22.942876] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:30.220 00:13:30.220 real 0m0.554s 00:13:30.220 user 0m0.342s 00:13:30.220 sys 0m0.105s 00:13:30.220 04:04:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.220 ************************************ 00:13:30.220 END TEST bdev_json_nonarray 00:13:30.220 ************************************ 00:13:30.220 04:04:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:30.220 04:04:23 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:30.220 04:04:23 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:30.220 04:04:23 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:30.220 04:04:23 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:30.220 04:04:23 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:30.220 04:04:23 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:30.220 04:04:23 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:30.220 04:04:23 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:30.220 04:04:23 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:30.220 04:04:23 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:30.220 04:04:23 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:30.220 04:04:23 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:30.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:37.383 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:42.673 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:42.673 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:42.673 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:42.673 00:13:42.673 real 1m4.005s 00:13:42.673 user 1m24.833s 00:13:42.673 sys 0m54.348s 00:13:42.673 04:04:35 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.673 ************************************ 00:13:42.673 END TEST blockdev_xnvme 00:13:42.673 ************************************ 00:13:42.673 04:04:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:42.673 04:04:35 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:42.673 04:04:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:42.673 04:04:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.673 04:04:35 -- common/autotest_common.sh@10 -- # set +x 00:13:42.673 ************************************ 00:13:42.673 START TEST ublk 00:13:42.673 ************************************ 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:42.673 * Looking for test storage... 
00:13:42.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:42.673 04:04:35 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.673 04:04:35 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.673 04:04:35 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.673 04:04:35 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.673 04:04:35 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.673 04:04:35 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.673 04:04:35 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.673 04:04:35 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.673 04:04:35 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.673 04:04:35 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.673 04:04:35 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.673 04:04:35 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:42.673 04:04:35 ublk -- scripts/common.sh@345 -- # : 1 00:13:42.673 04:04:35 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.673 04:04:35 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:42.673 04:04:35 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:42.673 04:04:35 ublk -- scripts/common.sh@353 -- # local d=1 00:13:42.673 04:04:35 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.673 04:04:35 ublk -- scripts/common.sh@355 -- # echo 1 00:13:42.673 04:04:35 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.673 04:04:35 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:42.673 04:04:35 ublk -- scripts/common.sh@353 -- # local d=2 00:13:42.673 04:04:35 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.673 04:04:35 ublk -- scripts/common.sh@355 -- # echo 2 00:13:42.673 04:04:35 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.673 04:04:35 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.673 04:04:35 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.673 04:04:35 ublk -- scripts/common.sh@368 -- # return 0 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:42.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.673 --rc genhtml_branch_coverage=1 00:13:42.673 --rc genhtml_function_coverage=1 00:13:42.673 --rc genhtml_legend=1 00:13:42.673 --rc geninfo_all_blocks=1 00:13:42.673 --rc geninfo_unexecuted_blocks=1 00:13:42.673 00:13:42.673 ' 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:42.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.673 --rc genhtml_branch_coverage=1 00:13:42.673 --rc genhtml_function_coverage=1 00:13:42.673 --rc genhtml_legend=1 00:13:42.673 --rc geninfo_all_blocks=1 00:13:42.673 --rc geninfo_unexecuted_blocks=1 00:13:42.673 00:13:42.673 ' 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:42.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.673 --rc genhtml_branch_coverage=1 00:13:42.673 --rc 
genhtml_function_coverage=1 00:13:42.673 --rc genhtml_legend=1 00:13:42.673 --rc geninfo_all_blocks=1 00:13:42.673 --rc geninfo_unexecuted_blocks=1 00:13:42.673 00:13:42.673 ' 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:42.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.673 --rc genhtml_branch_coverage=1 00:13:42.673 --rc genhtml_function_coverage=1 00:13:42.673 --rc genhtml_legend=1 00:13:42.673 --rc geninfo_all_blocks=1 00:13:42.673 --rc geninfo_unexecuted_blocks=1 00:13:42.673 00:13:42.673 ' 00:13:42.673 04:04:35 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:42.673 04:04:35 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:42.673 04:04:35 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:42.673 04:04:35 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:42.673 04:04:35 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:42.673 04:04:35 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:42.673 04:04:35 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:42.673 04:04:35 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:42.673 04:04:35 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:42.673 04:04:35 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:42.673 04:04:35 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:42.673 04:04:35 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:42.673 04:04:35 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:42.673 04:04:35 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:42.673 04:04:35 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:42.673 04:04:35 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:42.673 04:04:35 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:42.673 04:04:35 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:42.673 04:04:35 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:42.673 04:04:35 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.673 04:04:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:42.673 ************************************ 00:13:42.673 START TEST test_save_ublk_config 00:13:42.673 ************************************ 00:13:42.673 04:04:35 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:42.673 04:04:35 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:42.673 04:04:35 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=71074 00:13:42.673 04:04:35 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:42.673 04:04:35 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:42.673 04:04:35 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 71074 00:13:42.673 04:04:35 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71074 ']' 00:13:42.673 04:04:35 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
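The trace that follows exercises the whole save path: spdk_tgt is started with -L ublk, a ublk target and a 128 MiB malloc bdev are created, a ublk disk is started on top of it (one queue, depth 128 in the trace), and save_config then captures the resulting state as the JSON dumped below. A sketch of the same sequence over the default /var/tmp/spdk.sock socket; the RPC names and flags are assumed from the standard SPDK rpc.py command set rather than quoted from the script:

# Sketch of the test_save_ublk_config flow (RPC names and flags assumed from
# the standard rpc.py command set; sizes mirror MALLOC_SIZE_MB/MALLOC_BS above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" ublk_create_target                        # "UBLK target created successfully"
"$rpc" bdev_malloc_create -b malloc0 128 4096    # 128 MiB bdev, 4096-byte blocks
"$rpc" ublk_start_disk malloc0 0 -q 1 -d 128     # ublk0 over malloc0, 1 queue, depth 128
"$rpc" save_config > /tmp/ublk_config.json       # the JSON shown below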
00:13:42.673 04:04:35 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.673 04:04:35 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.673 04:04:35 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.673 04:04:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:42.673 [2024-10-13 04:04:35.758225] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:13:42.673 [2024-10-13 04:04:35.758386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71074 ] 00:13:42.934 [2024-10-13 04:04:35.910555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.934 [2024-10-13 04:04:36.045682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.503 04:04:36 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.503 04:04:36 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:43.503 04:04:36 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:43.503 04:04:36 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:43.503 04:04:36 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.503 04:04:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:43.767 [2024-10-13 04:04:36.669639] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:43.767 [2024-10-13 04:04:36.670527] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:43.767 malloc0 00:13:43.767 [2024-10-13 04:04:36.734058] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:43.767 [2024-10-13 04:04:36.734166] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:43.767 [2024-10-13 04:04:36.734177] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:43.767 [2024-10-13 04:04:36.734185] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:43.767 [2024-10-13 04:04:36.742753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:43.767 [2024-10-13 04:04:36.742787] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:43.767 [2024-10-13 04:04:36.749688] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:43.767 [2024-10-13 04:04:36.749861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:43.767 [2024-10-13 04:04:36.766646] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:43.767 0 00:13:43.767 04:04:36 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.767 04:04:36 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:43.767 04:04:36 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.767 04:04:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:44.027 04:04:37 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:44.027 04:04:37 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:44.027 "subsystems": [ 00:13:44.027 { 00:13:44.028 "subsystem": "fsdev", 00:13:44.028 "config": [ 00:13:44.028 { 00:13:44.028 "method": "fsdev_set_opts", 00:13:44.028 "params": { 00:13:44.028 "fsdev_io_pool_size": 65535, 00:13:44.028 "fsdev_io_cache_size": 256 00:13:44.028 } 00:13:44.028 } 00:13:44.028 ] 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "subsystem": "keyring", 00:13:44.028 "config": [] 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "subsystem": "iobuf", 00:13:44.028 "config": [ 00:13:44.028 { 00:13:44.028 "method": "iobuf_set_options", 00:13:44.028 "params": { 00:13:44.028 "small_pool_count": 8192, 00:13:44.028 "large_pool_count": 1024, 00:13:44.028 "small_bufsize": 8192, 00:13:44.028 "large_bufsize": 135168 00:13:44.028 } 00:13:44.028 } 00:13:44.028 ] 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "subsystem": "sock", 00:13:44.028 "config": [ 00:13:44.028 { 00:13:44.028 "method": "sock_set_default_impl", 00:13:44.028 "params": { 00:13:44.028 "impl_name": "posix" 00:13:44.028 } 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "method": "sock_impl_set_options", 00:13:44.028 "params": { 00:13:44.028 "impl_name": "ssl", 00:13:44.028 "recv_buf_size": 4096, 00:13:44.028 "send_buf_size": 4096, 00:13:44.028 "enable_recv_pipe": true, 00:13:44.028 "enable_quickack": false, 00:13:44.028 "enable_placement_id": 0, 00:13:44.028 "enable_zerocopy_send_server": true, 00:13:44.028 "enable_zerocopy_send_client": false, 00:13:44.028 "zerocopy_threshold": 0, 00:13:44.028 "tls_version": 0, 00:13:44.028 "enable_ktls": false 00:13:44.028 } 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "method": "sock_impl_set_options", 00:13:44.028 "params": { 00:13:44.028 "impl_name": "posix", 00:13:44.028 "recv_buf_size": 2097152, 00:13:44.028 "send_buf_size": 2097152, 00:13:44.028 "enable_recv_pipe": true, 00:13:44.028 "enable_quickack": false, 00:13:44.028 "enable_placement_id": 0, 00:13:44.028 "enable_zerocopy_send_server": true, 00:13:44.028 "enable_zerocopy_send_client": false, 00:13:44.028 "zerocopy_threshold": 0, 00:13:44.028 "tls_version": 0, 00:13:44.028 "enable_ktls": false 00:13:44.028 } 00:13:44.028 } 00:13:44.028 ] 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "subsystem": "vmd", 00:13:44.028 "config": [] 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "subsystem": "accel", 00:13:44.028 "config": [ 00:13:44.028 { 00:13:44.028 "method": "accel_set_options", 00:13:44.028 "params": { 00:13:44.028 "small_cache_size": 128, 00:13:44.028 "large_cache_size": 16, 00:13:44.028 "task_count": 2048, 00:13:44.028 "sequence_count": 2048, 00:13:44.028 "buf_count": 2048 00:13:44.028 } 00:13:44.028 } 00:13:44.028 ] 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "subsystem": "bdev", 00:13:44.028 "config": [ 00:13:44.028 { 00:13:44.028 "method": "bdev_set_options", 00:13:44.028 "params": { 00:13:44.028 "bdev_io_pool_size": 65535, 00:13:44.028 "bdev_io_cache_size": 256, 00:13:44.028 "bdev_auto_examine": true, 00:13:44.028 "iobuf_small_cache_size": 128, 00:13:44.028 "iobuf_large_cache_size": 16 00:13:44.028 } 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "method": "bdev_raid_set_options", 00:13:44.028 "params": { 00:13:44.028 "process_window_size_kb": 1024, 00:13:44.028 "process_max_bandwidth_mb_sec": 0 00:13:44.028 } 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "method": "bdev_iscsi_set_options", 00:13:44.028 "params": { 00:13:44.028 "timeout_sec": 30 00:13:44.028 } 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "method": "bdev_nvme_set_options", 
00:13:44.028 "params": { 00:13:44.028 "action_on_timeout": "none", 00:13:44.028 "timeout_us": 0, 00:13:44.028 "timeout_admin_us": 0, 00:13:44.028 "keep_alive_timeout_ms": 10000, 00:13:44.028 "arbitration_burst": 0, 00:13:44.028 "low_priority_weight": 0, 00:13:44.028 "medium_priority_weight": 0, 00:13:44.028 "high_priority_weight": 0, 00:13:44.028 "nvme_adminq_poll_period_us": 10000, 00:13:44.028 "nvme_ioq_poll_period_us": 0, 00:13:44.028 "io_queue_requests": 0, 00:13:44.028 "delay_cmd_submit": true, 00:13:44.028 "transport_retry_count": 4, 00:13:44.028 "bdev_retry_count": 3, 00:13:44.028 "transport_ack_timeout": 0, 00:13:44.028 "ctrlr_loss_timeout_sec": 0, 00:13:44.028 "reconnect_delay_sec": 0, 00:13:44.028 "fast_io_fail_timeout_sec": 0, 00:13:44.028 "disable_auto_failback": false, 00:13:44.028 "generate_uuids": false, 00:13:44.028 "transport_tos": 0, 00:13:44.028 "nvme_error_stat": false, 00:13:44.028 "rdma_srq_size": 0, 00:13:44.028 "io_path_stat": false, 00:13:44.028 "allow_accel_sequence": false, 00:13:44.028 "rdma_max_cq_size": 0, 00:13:44.028 "rdma_cm_event_timeout_ms": 0, 00:13:44.028 "dhchap_digests": [ 00:13:44.028 "sha256", 00:13:44.028 "sha384", 00:13:44.028 "sha512" 00:13:44.028 ], 00:13:44.028 "dhchap_dhgroups": [ 00:13:44.028 "null", 00:13:44.028 "ffdhe2048", 00:13:44.028 "ffdhe3072", 00:13:44.028 "ffdhe4096", 00:13:44.028 "ffdhe6144", 00:13:44.028 "ffdhe8192" 00:13:44.028 ] 00:13:44.028 } 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "method": "bdev_nvme_set_hotplug", 00:13:44.028 "params": { 00:13:44.028 "period_us": 100000, 00:13:44.028 "enable": false 00:13:44.028 } 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "method": "bdev_malloc_create", 00:13:44.028 "params": { 00:13:44.028 "name": "malloc0", 00:13:44.028 "num_blocks": 8192, 00:13:44.028 "block_size": 4096, 00:13:44.028 "physical_block_size": 4096, 00:13:44.028 "uuid": "051cfc7d-a278-41e3-85d9-cf58cf718139", 00:13:44.028 "optimal_io_boundary": 0, 00:13:44.028 "md_size": 0, 00:13:44.028 "dif_type": 0, 00:13:44.028 "dif_is_head_of_md": false, 00:13:44.028 "dif_pi_format": 0 00:13:44.028 } 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "method": "bdev_wait_for_examine" 00:13:44.028 } 00:13:44.028 ] 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "subsystem": "scsi", 00:13:44.028 "config": null 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "subsystem": "scheduler", 00:13:44.028 "config": [ 00:13:44.028 { 00:13:44.028 "method": "framework_set_scheduler", 00:13:44.028 "params": { 00:13:44.028 "name": "static" 00:13:44.028 } 00:13:44.028 } 00:13:44.028 ] 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "subsystem": "vhost_scsi", 00:13:44.028 "config": [] 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "subsystem": "vhost_blk", 00:13:44.028 "config": [] 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "subsystem": "ublk", 00:13:44.028 "config": [ 00:13:44.028 { 00:13:44.028 "method": "ublk_create_target", 00:13:44.028 "params": { 00:13:44.028 "cpumask": "1" 00:13:44.028 } 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "method": "ublk_start_disk", 00:13:44.028 "params": { 00:13:44.028 "bdev_name": "malloc0", 00:13:44.028 "ublk_id": 0, 00:13:44.028 "num_queues": 1, 00:13:44.028 "queue_depth": 128 00:13:44.028 } 00:13:44.028 } 00:13:44.028 ] 00:13:44.028 }, 00:13:44.028 { 00:13:44.028 "subsystem": "nbd", 00:13:44.028 "config": [] 00:13:44.029 }, 00:13:44.029 { 00:13:44.029 "subsystem": "nvmf", 00:13:44.029 "config": [ 00:13:44.029 { 00:13:44.029 "method": "nvmf_set_config", 00:13:44.029 "params": { 00:13:44.029 "discovery_filter": "match_any", 00:13:44.029 
"admin_cmd_passthru": { 00:13:44.029 "identify_ctrlr": false 00:13:44.029 }, 00:13:44.029 "dhchap_digests": [ 00:13:44.029 "sha256", 00:13:44.029 "sha384", 00:13:44.029 "sha512" 00:13:44.029 ], 00:13:44.029 "dhchap_dhgroups": [ 00:13:44.029 "null", 00:13:44.029 "ffdhe2048", 00:13:44.029 "ffdhe3072", 00:13:44.029 "ffdhe4096", 00:13:44.029 "ffdhe6144", 00:13:44.029 "ffdhe8192" 00:13:44.029 ] 00:13:44.029 } 00:13:44.029 }, 00:13:44.029 { 00:13:44.029 "method": "nvmf_set_max_subsystems", 00:13:44.029 "params": { 00:13:44.029 "max_subsystems": 1024 00:13:44.029 } 00:13:44.029 }, 00:13:44.029 { 00:13:44.029 "method": "nvmf_set_crdt", 00:13:44.029 "params": { 00:13:44.029 "crdt1": 0, 00:13:44.029 "crdt2": 0, 00:13:44.029 "crdt3": 0 00:13:44.029 } 00:13:44.029 } 00:13:44.029 ] 00:13:44.029 }, 00:13:44.029 { 00:13:44.029 "subsystem": "iscsi", 00:13:44.029 "config": [ 00:13:44.029 { 00:13:44.029 "method": "iscsi_set_options", 00:13:44.029 "params": { 00:13:44.029 "node_base": "iqn.2016-06.io.spdk", 00:13:44.029 "max_sessions": 128, 00:13:44.029 "max_connections_per_session": 2, 00:13:44.029 "max_queue_depth": 64, 00:13:44.029 "default_time2wait": 2, 00:13:44.029 "default_time2retain": 20, 00:13:44.029 "first_burst_length": 8192, 00:13:44.029 "immediate_data": true, 00:13:44.029 "allow_duplicated_isid": false, 00:13:44.029 "error_recovery_level": 0, 00:13:44.029 "nop_timeout": 60, 00:13:44.029 "nop_in_interval": 30, 00:13:44.029 "disable_chap": false, 00:13:44.029 "require_chap": false, 00:13:44.029 "mutual_chap": false, 00:13:44.029 "chap_group": 0, 00:13:44.029 "max_large_datain_per_connection": 64, 00:13:44.029 "max_r2t_per_connection": 4, 00:13:44.029 "pdu_pool_size": 36864, 00:13:44.029 "immediate_data_pool_size": 16384, 00:13:44.029 "data_out_pool_size": 2048 00:13:44.029 } 00:13:44.029 } 00:13:44.029 ] 00:13:44.029 } 00:13:44.029 ] 00:13:44.029 }' 00:13:44.029 04:04:37 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 71074 00:13:44.029 04:04:37 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71074 ']' 00:13:44.029 04:04:37 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71074 00:13:44.029 04:04:37 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:44.029 04:04:37 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.029 04:04:37 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71074 00:13:44.029 04:04:37 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:44.029 killing process with pid 71074 00:13:44.029 04:04:37 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:44.029 04:04:37 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71074' 00:13:44.029 04:04:37 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71074 00:13:44.029 04:04:37 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71074 00:13:45.416 [2024-10-13 04:04:38.199861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:45.416 [2024-10-13 04:04:38.233682] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:45.416 [2024-10-13 04:04:38.233879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:45.416 [2024-10-13 04:04:38.237958] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_DEL_DEV completed 00:13:45.416 [2024-10-13 04:04:38.238047] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:45.416 [2024-10-13 04:04:38.238069] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:45.416 [2024-10-13 04:04:38.238110] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:45.416 [2024-10-13 04:04:38.238328] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:46.802 04:04:39 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=71129 00:13:46.802 04:04:39 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 71129 00:13:46.802 04:04:39 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71129 ']' 00:13:46.802 04:04:39 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.802 04:04:39 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:46.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.802 04:04:39 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.802 04:04:39 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:46.802 04:04:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:46.802 04:04:39 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:46.802 "subsystems": [ 00:13:46.802 { 00:13:46.802 "subsystem": "fsdev", 00:13:46.802 "config": [ 00:13:46.802 { 00:13:46.802 "method": "fsdev_set_opts", 00:13:46.802 "params": { 00:13:46.802 "fsdev_io_pool_size": 65535, 00:13:46.802 "fsdev_io_cache_size": 256 00:13:46.802 } 00:13:46.802 } 00:13:46.802 ] 00:13:46.802 }, 00:13:46.802 { 00:13:46.802 "subsystem": "keyring", 00:13:46.802 "config": [] 00:13:46.802 }, 00:13:46.802 { 00:13:46.802 "subsystem": "iobuf", 00:13:46.802 "config": [ 00:13:46.802 { 00:13:46.802 "method": "iobuf_set_options", 00:13:46.802 "params": { 00:13:46.802 "small_pool_count": 8192, 00:13:46.802 "large_pool_count": 1024, 00:13:46.802 "small_bufsize": 8192, 00:13:46.802 "large_bufsize": 135168 00:13:46.802 } 00:13:46.802 } 00:13:46.802 ] 00:13:46.802 }, 00:13:46.802 { 00:13:46.802 "subsystem": "sock", 00:13:46.802 "config": [ 00:13:46.802 { 00:13:46.802 "method": "sock_set_default_impl", 00:13:46.802 "params": { 00:13:46.802 "impl_name": "posix" 00:13:46.802 } 00:13:46.802 }, 00:13:46.802 { 00:13:46.802 "method": "sock_impl_set_options", 00:13:46.802 "params": { 00:13:46.802 "impl_name": "ssl", 00:13:46.802 "recv_buf_size": 4096, 00:13:46.802 "send_buf_size": 4096, 00:13:46.802 "enable_recv_pipe": true, 00:13:46.802 "enable_quickack": false, 00:13:46.802 "enable_placement_id": 0, 00:13:46.802 "enable_zerocopy_send_server": true, 00:13:46.802 "enable_zerocopy_send_client": false, 00:13:46.802 "zerocopy_threshold": 0, 00:13:46.802 "tls_version": 0, 00:13:46.802 "enable_ktls": false 00:13:46.802 } 00:13:46.802 }, 00:13:46.802 { 00:13:46.802 "method": "sock_impl_set_options", 00:13:46.802 "params": { 00:13:46.802 "impl_name": "posix", 00:13:46.802 "recv_buf_size": 2097152, 00:13:46.802 "send_buf_size": 2097152, 00:13:46.802 "enable_recv_pipe": true, 00:13:46.802 "enable_quickack": false, 00:13:46.802 "enable_placement_id": 0, 00:13:46.802 "enable_zerocopy_send_server": true, 00:13:46.802 "enable_zerocopy_send_client": false, 00:13:46.802 "zerocopy_threshold": 0, 00:13:46.802 "tls_version": 0, 00:13:46.802 
"enable_ktls": false 00:13:46.802 } 00:13:46.802 } 00:13:46.802 ] 00:13:46.802 }, 00:13:46.802 { 00:13:46.802 "subsystem": "vmd", 00:13:46.802 "config": [] 00:13:46.802 }, 00:13:46.802 { 00:13:46.802 "subsystem": "accel", 00:13:46.802 "config": [ 00:13:46.802 { 00:13:46.802 "method": "accel_set_options", 00:13:46.802 "params": { 00:13:46.802 "small_cache_size": 128, 00:13:46.802 "large_cache_size": 16, 00:13:46.802 "task_count": 2048, 00:13:46.802 "sequence_count": 2048, 00:13:46.802 "buf_count": 2048 00:13:46.802 } 00:13:46.802 } 00:13:46.802 ] 00:13:46.802 }, 00:13:46.802 { 00:13:46.802 "subsystem": "bdev", 00:13:46.802 "config": [ 00:13:46.802 { 00:13:46.802 "method": "bdev_set_options", 00:13:46.802 "params": { 00:13:46.802 "bdev_io_pool_size": 65535, 00:13:46.802 "bdev_io_cache_size": 256, 00:13:46.802 "bdev_auto_examine": true, 00:13:46.802 "iobuf_small_cache_size": 128, 00:13:46.802 "iobuf_large_cache_size": 16 00:13:46.802 } 00:13:46.802 }, 00:13:46.802 { 00:13:46.802 "method": "bdev_raid_set_options", 00:13:46.802 "params": { 00:13:46.802 "process_window_size_kb": 1024, 00:13:46.802 "process_max_bandwidth_mb_sec": 0 00:13:46.802 } 00:13:46.802 }, 00:13:46.802 { 00:13:46.802 "method": "bdev_iscsi_set_options", 00:13:46.802 "params": { 00:13:46.802 "timeout_sec": 30 00:13:46.802 } 00:13:46.802 }, 00:13:46.802 { 00:13:46.802 "method": "bdev_nvme_set_options", 00:13:46.802 "params": { 00:13:46.802 "action_on_timeout": "none", 00:13:46.802 "timeout_us": 0, 00:13:46.802 "timeout_admin_us": 0, 00:13:46.803 "keep_alive_timeout_ms": 10000, 00:13:46.803 "arbitration_burst": 0, 00:13:46.803 "low_priority_weight": 0, 00:13:46.803 "medium_priority_weight": 0, 00:13:46.803 "high_priority_weight": 0, 00:13:46.803 "nvme_adminq_poll_period_us": 10000, 00:13:46.803 "nvme_ioq_poll_period_us": 0, 00:13:46.803 "io_queue_requests": 0, 00:13:46.803 "delay_cmd_submit": true, 00:13:46.803 "transport_retry_count": 4, 00:13:46.803 "bdev_retry_count": 3, 00:13:46.803 "transport_ack_timeout": 0, 00:13:46.803 "ctrlr_loss_timeout_sec": 0, 00:13:46.803 "reconnect_delay_sec": 0, 00:13:46.803 "fast_io_fail_timeout_sec": 0, 00:13:46.803 "disable_auto_failback": false, 00:13:46.803 "generate_uuids": false, 00:13:46.803 "transport_tos": 0, 00:13:46.803 "nvme_error_stat": false, 00:13:46.803 "rdma_srq_size": 0, 00:13:46.803 "io_path_stat": false, 00:13:46.803 "allow_accel_sequence": false, 00:13:46.803 "rdma_max_cq_size": 0, 00:13:46.803 "rdma_cm_event_timeout_ms": 0, 00:13:46.803 "dhchap_digests": [ 00:13:46.803 "sha256", 00:13:46.803 "sha384", 00:13:46.803 "sha512" 00:13:46.803 ], 00:13:46.803 "dhchap_dhgroups": [ 00:13:46.803 "null", 00:13:46.803 "ffdhe2048", 00:13:46.803 "ffdhe3072", 00:13:46.803 "ffdhe4096", 00:13:46.803 "ffdhe6144", 00:13:46.803 "ffdhe8192" 00:13:46.803 ] 00:13:46.803 } 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "method": "bdev_nvme_set_hotplug", 00:13:46.803 "params": { 00:13:46.803 "period_us": 100000, 00:13:46.803 "enable": false 00:13:46.803 } 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "method": "bdev_malloc_create", 00:13:46.803 "params": { 00:13:46.803 "name": "malloc0", 00:13:46.803 "num_blocks": 8192, 00:13:46.803 "block_size": 4096, 00:13:46.803 "physical_block_size": 4096, 00:13:46.803 "uuid": "051cfc7d-a278-41e3-85d9-cf58cf718139", 00:13:46.803 "optimal_io_boundary": 0, 00:13:46.803 "md_size": 0, 00:13:46.803 "dif_type": 0, 00:13:46.803 "dif_is_head_of_md": false, 00:13:46.803 "dif_pi_format": 0 00:13:46.803 } 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "method": 
"bdev_wait_for_examine" 00:13:46.803 } 00:13:46.803 ] 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "subsystem": "scsi", 00:13:46.803 "config": null 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "subsystem": "scheduler", 00:13:46.803 "config": [ 00:13:46.803 { 00:13:46.803 "method": "framework_set_scheduler", 00:13:46.803 "params": { 00:13:46.803 "name": "static" 00:13:46.803 } 00:13:46.803 } 00:13:46.803 ] 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "subsystem": "vhost_scsi", 00:13:46.803 "config": [] 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "subsystem": "vhost_blk", 00:13:46.803 "config": [] 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "subsystem": "ublk", 00:13:46.803 "config": [ 00:13:46.803 { 00:13:46.803 "method": "ublk_create_target", 00:13:46.803 "params": { 00:13:46.803 "cpumask": "1" 00:13:46.803 } 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "method": "ublk_start_disk", 00:13:46.803 "params": { 00:13:46.803 "bdev_name": "malloc0", 00:13:46.803 "ublk_id": 0, 00:13:46.803 "num_queues": 1, 00:13:46.803 "queue_depth": 128 00:13:46.803 } 00:13:46.803 } 00:13:46.803 ] 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "subsystem": "nbd", 00:13:46.803 "config": [] 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "subsystem": "nvmf", 00:13:46.803 "config": [ 00:13:46.803 { 00:13:46.803 "method": "nvmf_set_config", 00:13:46.803 "params": { 00:13:46.803 "discovery_filter": "match_any", 00:13:46.803 "admin_cmd_passthru": { 00:13:46.803 "identify_ctrlr": false 00:13:46.803 }, 00:13:46.803 "dhchap_digests": [ 00:13:46.803 "sha256", 00:13:46.803 "sha384", 00:13:46.803 "sha512" 00:13:46.803 ], 00:13:46.803 "dhchap_dhgroups": [ 00:13:46.803 "null", 00:13:46.803 "ffdhe2048", 00:13:46.803 "ffdhe3072", 00:13:46.803 "ffdhe4096", 00:13:46.803 "ffdhe6144", 00:13:46.803 "ffdhe8192" 00:13:46.803 ] 00:13:46.803 } 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "method": "nvmf_set_max_subsystems", 00:13:46.803 "params": { 00:13:46.803 "max_subsystems": 1024 00:13:46.803 } 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "method": "nvmf_set_crdt", 00:13:46.803 "params": { 00:13:46.803 "crdt1": 0, 00:13:46.803 "crdt2": 0, 00:13:46.803 "crdt3": 0 00:13:46.803 } 00:13:46.803 } 00:13:46.803 ] 00:13:46.803 }, 00:13:46.803 { 00:13:46.803 "subsystem": "iscsi", 00:13:46.803 "config": [ 00:13:46.803 { 00:13:46.803 "method": "iscsi_set_options", 00:13:46.803 "params": { 00:13:46.803 "node_base": "iqn.2016-06.io.spdk", 00:13:46.803 "max_sessions": 128, 00:13:46.803 "max_connections_per_session": 2, 00:13:46.803 "max_queue_depth": 64, 00:13:46.803 "default_time2wait": 2, 00:13:46.803 "default_time2retain": 20, 00:13:46.803 "first_burst_length": 8192, 00:13:46.803 "immediate_data": true, 00:13:46.803 "allow_duplicated_isid": false, 00:13:46.803 "error_recovery_level": 0, 00:13:46.803 "nop_timeout": 60, 00:13:46.803 "nop_in_interval": 30, 00:13:46.803 "disable_chap": false, 00:13:46.803 "require_chap": false, 00:13:46.803 "mutual_chap": false, 00:13:46.803 "chap_group": 0, 00:13:46.803 "max_large_datain_per_connection": 64, 00:13:46.803 "max_r2t_per_connection": 4, 00:13:46.803 "pdu_pool_size": 36864, 00:13:46.803 "immediate_data_pool_size": 16384, 00:13:46.803 "data_out_pool_size": 2048 00:13:46.803 } 00:13:46.803 } 00:13:46.803 ] 00:13:46.803 } 00:13:46.803 ] 00:13:46.803 }' 00:13:46.803 04:04:39 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:46.803 [2024-10-13 04:04:39.712287] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:13:46.803 [2024-10-13 04:04:39.712410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71129 ] 00:13:46.803 [2024-10-13 04:04:39.858077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.803 [2024-10-13 04:04:39.954373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.745 [2024-10-13 04:04:40.707633] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:47.745 [2024-10-13 04:04:40.708470] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:47.745 [2024-10-13 04:04:40.715736] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:47.745 [2024-10-13 04:04:40.715807] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:47.745 [2024-10-13 04:04:40.715815] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:47.745 [2024-10-13 04:04:40.715833] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:47.745 [2024-10-13 04:04:40.724695] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:47.745 [2024-10-13 04:04:40.724714] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:47.745 [2024-10-13 04:04:40.731654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:47.745 [2024-10-13 04:04:40.731744] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:47.745 [2024-10-13 04:04:40.748634] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 71129 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71129 ']' 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71129 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71129 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:47.745 killing process with pid 71129 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71129' 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71129 00:13:47.745 04:04:40 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71129 00:13:49.130 [2024-10-13 04:04:41.989003] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:49.130 [2024-10-13 04:04:42.020653] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:49.130 [2024-10-13 04:04:42.020774] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:49.130 [2024-10-13 04:04:42.028644] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:49.130 [2024-10-13 04:04:42.028692] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:49.130 [2024-10-13 04:04:42.028699] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:49.130 [2024-10-13 04:04:42.028724] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:49.130 [2024-10-13 04:04:42.028856] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:50.514 04:04:43 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:50.514 00:13:50.514 real 0m7.653s 00:13:50.514 user 0m5.530s 00:13:50.514 sys 0m2.786s 00:13:50.514 04:04:43 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.514 04:04:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:50.514 ************************************ 00:13:50.514 END TEST test_save_ublk_config 00:13:50.514 ************************************ 00:13:50.514 04:04:43 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71202 00:13:50.514 04:04:43 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:50.514 04:04:43 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71202 00:13:50.514 04:04:43 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:50.514 04:04:43 ublk -- common/autotest_common.sh@831 -- # '[' -z 71202 ']' 00:13:50.514 04:04:43 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.514 04:04:43 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.514 04:04:43 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.514 04:04:43 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.514 04:04:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:50.514 [2024-10-13 04:04:43.443942] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:13:50.514 [2024-10-13 04:04:43.444069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71202 ] 00:13:50.514 [2024-10-13 04:04:43.594066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:50.775 [2024-10-13 04:04:43.721878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.775 [2024-10-13 04:04:43.721963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.347 04:04:44 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.347 04:04:44 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:51.347 04:04:44 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:51.347 04:04:44 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:51.347 04:04:44 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.347 04:04:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.347 ************************************ 00:13:51.347 START TEST test_create_ublk 00:13:51.347 ************************************ 00:13:51.347 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:51.347 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:51.347 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.347 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.347 [2024-10-13 04:04:44.429644] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:51.347 [2024-10-13 04:04:44.431957] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:51.347 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.347 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:51.347 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:51.347 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.347 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.608 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.608 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:51.608 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:51.608 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.608 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.608 [2024-10-13 04:04:44.661814] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:51.608 [2024-10-13 04:04:44.662272] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:51.608 [2024-10-13 04:04:44.662293] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:51.608 [2024-10-13 04:04:44.662302] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:51.608 [2024-10-13 04:04:44.670957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:51.608 [2024-10-13 04:04:44.670990] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:51.608 
[2024-10-13 04:04:44.677655] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:51.608 [2024-10-13 04:04:44.688718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:51.608 [2024-10-13 04:04:44.699753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:51.608 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.608 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:51.608 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:51.608 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:51.608 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.608 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:51.608 04:04:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.608 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:51.608 { 00:13:51.608 "ublk_device": "/dev/ublkb0", 00:13:51.608 "id": 0, 00:13:51.608 "queue_depth": 512, 00:13:51.608 "num_queues": 4, 00:13:51.608 "bdev_name": "Malloc0" 00:13:51.608 } 00:13:51.608 ]' 00:13:51.608 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:51.608 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:51.608 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:51.888 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:51.888 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:51.888 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:51.888 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:51.888 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:51.888 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:51.888 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:51.888 04:04:44 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:51.888 04:04:44 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:51.888 04:04:44 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:51.888 04:04:44 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:51.888 04:04:44 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:51.888 04:04:44 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:51.888 04:04:44 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:51.888 04:04:44 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:51.888 04:04:44 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:51.888 04:04:44 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:51.888 04:04:44 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:13:51.888 04:04:44 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:51.888 fio: verification read phase will never start because write phase uses all of runtime 00:13:51.888 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:51.888 fio-3.35 00:13:51.888 Starting 1 process 00:14:04.156 00:14:04.156 fio_test: (groupid=0, jobs=1): err= 0: pid=71247: Sun Oct 13 04:04:55 2024 00:14:04.156 write: IOPS=18.7k, BW=72.9MiB/s (76.5MB/s)(729MiB/10001msec); 0 zone resets 00:14:04.156 clat (usec): min=33, max=3966, avg=52.73, stdev=84.49 00:14:04.156 lat (usec): min=33, max=3967, avg=53.20, stdev=84.51 00:14:04.156 clat percentiles (usec): 00:14:04.156 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 44], 00:14:04.156 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 50], 00:14:04.156 | 70.00th=[ 52], 80.00th=[ 56], 90.00th=[ 61], 95.00th=[ 67], 00:14:04.156 | 99.00th=[ 82], 99.50th=[ 97], 99.90th=[ 1401], 99.95th=[ 2507], 00:14:04.156 | 99.99th=[ 3556] 00:14:04.156 bw ( KiB/s): min=58120, max=84528, per=99.90%, avg=74621.05, stdev=8080.67, samples=19 00:14:04.156 iops : min=14530, max=21132, avg=18655.26, stdev=2020.17, samples=19 00:14:04.156 lat (usec) : 50=61.09%, 100=38.43%, 250=0.29%, 500=0.06%, 750=0.01% 00:14:04.156 lat (usec) : 1000=0.01% 00:14:04.156 lat (msec) : 2=0.04%, 4=0.07% 00:14:04.156 cpu : usr=3.31%, sys=14.98%, ctx=186734, majf=0, minf=796 00:14:04.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.156 issued rwts: total=0,186749,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.156 00:14:04.156 Run status group 0 (all jobs): 00:14:04.156 WRITE: bw=72.9MiB/s (76.5MB/s), 72.9MiB/s-72.9MiB/s (76.5MB/s-76.5MB/s), io=729MiB (765MB), run=10001-10001msec 00:14:04.156 00:14:04.156 Disk stats (read/write): 00:14:04.156 ublkb0: ios=0/184718, merge=0/0, ticks=0/8157, in_queue=8157, util=99.08% 00:14:04.156 04:04:55 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.156 [2024-10-13 04:04:55.122078] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.156 [2024-10-13 04:04:55.156666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.156 [2024-10-13 04:04:55.157263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.156 [2024-10-13 04:04:55.164639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.156 [2024-10-13 04:04:55.164870] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:04.156 [2024-10-13 04:04:55.164884] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.156 04:04:55 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.156 [2024-10-13 04:04:55.180689] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:04.156 request: 00:14:04.156 { 00:14:04.156 "ublk_id": 0, 00:14:04.156 "method": "ublk_stop_disk", 00:14:04.156 "req_id": 1 00:14:04.156 } 00:14:04.156 Got JSON-RPC error response 00:14:04.156 response: 00:14:04.156 { 00:14:04.156 "code": -19, 00:14:04.156 "message": "No such device" 00:14:04.156 } 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:04.156 04:04:55 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.156 [2024-10-13 04:04:55.196688] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:04.156 [2024-10-13 04:04:55.200227] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:04.156 [2024-10-13 04:04:55.200259] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.156 04:04:55 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.156 04:04:55 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:04.156 04:04:55 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.156 04:04:55 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:04.156 04:04:55 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:04.156 04:04:55 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:04.156 04:04:55 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.156 04:04:55 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:04.156 04:04:55 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:04.156 04:04:55 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:04.156 00:14:04.156 real 0m11.234s 00:14:04.156 user 0m0.633s 00:14:04.156 sys 0m1.583s 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.156 04:04:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.156 ************************************ 00:14:04.156 END TEST test_create_ublk 00:14:04.156 ************************************ 00:14:04.156 04:04:55 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:04.156 04:04:55 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:04.156 04:04:55 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.156 04:04:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.156 ************************************ 00:14:04.156 START TEST test_create_multi_ublk 00:14:04.156 ************************************ 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.156 [2024-10-13 04:04:55.703634] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:04.156 [2024-10-13 04:04:55.705201] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:04.156 04:04:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.157 04:04:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.157 [2024-10-13 04:04:55.927735] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:14:04.157 [2024-10-13 04:04:55.928047] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:04.157 [2024-10-13 04:04:55.928060] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:04.157 [2024-10-13 04:04:55.928068] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:04.157 [2024-10-13 04:04:55.939676] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:04.157 [2024-10-13 04:04:55.939697] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:04.157 [2024-10-13 04:04:55.951635] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:04.157 [2024-10-13 04:04:55.952128] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:04.157 [2024-10-13 04:04:55.966629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:04.157 04:04:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.157 04:04:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:04.157 04:04:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.157 04:04:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:04.157 04:04:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.157 04:04:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.157 [2024-10-13 04:04:56.185730] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:04.157 [2024-10-13 04:04:56.186041] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:04.157 [2024-10-13 04:04:56.186055] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:04.157 [2024-10-13 04:04:56.186060] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:04.157 [2024-10-13 04:04:56.193669] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:04.157 [2024-10-13 04:04:56.193688] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:04.157 [2024-10-13 04:04:56.201633] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:04.157 [2024-10-13 04:04:56.202134] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:04.157 [2024-10-13 04:04:56.212631] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.157 04:04:56 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.157 [2024-10-13 04:04:56.371737] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:04.157 [2024-10-13 04:04:56.372048] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:04.157 [2024-10-13 04:04:56.372061] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:04.157 [2024-10-13 04:04:56.372068] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:04.157 [2024-10-13 04:04:56.379659] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:04.157 [2024-10-13 04:04:56.379680] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:04.157 [2024-10-13 04:04:56.387637] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:04.157 [2024-10-13 04:04:56.388133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:04.157 [2024-10-13 04:04:56.404636] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.157 [2024-10-13 04:04:56.563725] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:04.157 [2024-10-13 04:04:56.564021] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:04.157 [2024-10-13 04:04:56.564036] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:04.157 [2024-10-13 04:04:56.564041] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:04.157 [2024-10-13 
04:04:56.571659] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:04.157 [2024-10-13 04:04:56.571677] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:04.157 [2024-10-13 04:04:56.579636] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:04.157 [2024-10-13 04:04:56.580137] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:04.157 [2024-10-13 04:04:56.585330] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:04.157 { 00:14:04.157 "ublk_device": "/dev/ublkb0", 00:14:04.157 "id": 0, 00:14:04.157 "queue_depth": 512, 00:14:04.157 "num_queues": 4, 00:14:04.157 "bdev_name": "Malloc0" 00:14:04.157 }, 00:14:04.157 { 00:14:04.157 "ublk_device": "/dev/ublkb1", 00:14:04.157 "id": 1, 00:14:04.157 "queue_depth": 512, 00:14:04.157 "num_queues": 4, 00:14:04.157 "bdev_name": "Malloc1" 00:14:04.157 }, 00:14:04.157 { 00:14:04.157 "ublk_device": "/dev/ublkb2", 00:14:04.157 "id": 2, 00:14:04.157 "queue_depth": 512, 00:14:04.157 "num_queues": 4, 00:14:04.157 "bdev_name": "Malloc2" 00:14:04.157 }, 00:14:04.157 { 00:14:04.157 "ublk_device": "/dev/ublkb3", 00:14:04.157 "id": 3, 00:14:04.157 "queue_depth": 512, 00:14:04.157 "num_queues": 4, 00:14:04.157 "bdev_name": "Malloc3" 00:14:04.157 } 00:14:04.157 ]' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:04.157 04:04:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:04.157 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:04.157 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:04.157 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:04.157 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:04.157 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:04.157 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.157 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:04.157 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:04.157 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:04.157 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.158 [2024-10-13 04:04:57.256729] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.158 [2024-10-13 04:04:57.289108] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.158 [2024-10-13 04:04:57.290055] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.158 [2024-10-13 04:04:57.296643] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.158 [2024-10-13 04:04:57.296873] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:04.158 [2024-10-13 04:04:57.296886] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.158 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.158 [2024-10-13 04:04:57.312692] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.418 [2024-10-13 04:04:57.341986] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.418 [2024-10-13 04:04:57.343084] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.418 [2024-10-13 04:04:57.351635] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.419 [2024-10-13 04:04:57.351863] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:04.419 [2024-10-13 04:04:57.351879] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:04.419 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.419 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.419 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:04.419 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.419 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.419 [2024-10-13 04:04:57.367697] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.419 [2024-10-13 04:04:57.402676] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.419 [2024-10-13 04:04:57.403294] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.419 [2024-10-13 04:04:57.411666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.419 [2024-10-13 04:04:57.411898] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:04.419 [2024-10-13 04:04:57.411953] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:04.419 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.419 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.419 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:04.419 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.419 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:14:04.419 [2024-10-13 04:04:57.426696] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:04.419 [2024-10-13 04:04:57.459071] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:04.419 [2024-10-13 04:04:57.459977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:04.419 [2024-10-13 04:04:57.466637] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:04.419 [2024-10-13 04:04:57.466846] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:04.419 [2024-10-13 04:04:57.466858] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:04.419 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.419 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:04.679 [2024-10-13 04:04:57.658696] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:04.679 [2024-10-13 04:04:57.662171] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:04.680 [2024-10-13 04:04:57.662202] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:04.680 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:04.680 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.680 04:04:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:04.680 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.680 04:04:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:04.939 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.939 04:04:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:04.939 04:04:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:04.939 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.939 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.510 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.510 04:04:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.510 04:04:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:05.510 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.510 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.510 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.510 04:04:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:05.510 04:04:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:05.510 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.510 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:05.771 00:14:05.771 real 0m3.166s 00:14:05.771 user 0m0.803s 00:14:05.771 sys 0m0.155s 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.771 04:04:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:05.771 ************************************ 00:14:05.771 END TEST test_create_multi_ublk 00:14:05.771 ************************************ 00:14:05.771 04:04:58 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:05.771 04:04:58 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:05.771 04:04:58 ublk -- ublk/ublk.sh@130 -- # killprocess 71202 00:14:05.771 04:04:58 ublk -- common/autotest_common.sh@950 -- # '[' -z 71202 ']' 00:14:05.771 04:04:58 ublk -- common/autotest_common.sh@954 -- # kill -0 71202 00:14:05.771 04:04:58 ublk -- common/autotest_common.sh@955 -- # uname 00:14:05.771 04:04:58 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.771 04:04:58 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71202 00:14:05.771 04:04:58 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:05.771 killing process with pid 71202 00:14:05.771 04:04:58 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:05.771 04:04:58 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71202' 00:14:05.771 04:04:58 ublk -- common/autotest_common.sh@969 -- # kill 71202 00:14:05.771 04:04:58 ublk -- common/autotest_common.sh@974 -- # wait 71202 00:14:06.343 [2024-10-13 04:04:59.452590] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:06.343 [2024-10-13 04:04:59.452650] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:07.284 00:14:07.284 real 0m24.598s 00:14:07.284 user 0m35.204s 00:14:07.284 sys 0m9.654s 00:14:07.284 04:05:00 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.284 ************************************ 00:14:07.284 END TEST ublk 00:14:07.284 ************************************ 00:14:07.284 04:05:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:07.284 04:05:00 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:07.284 04:05:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:14:07.284 04:05:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.284 04:05:00 -- common/autotest_common.sh@10 -- # set +x 00:14:07.284 ************************************ 00:14:07.284 START TEST ublk_recovery 00:14:07.284 ************************************ 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:07.284 * Looking for test storage... 00:14:07.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.284 04:05:00 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:07.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.284 --rc genhtml_branch_coverage=1 00:14:07.284 --rc genhtml_function_coverage=1 00:14:07.284 --rc genhtml_legend=1 00:14:07.284 --rc geninfo_all_blocks=1 00:14:07.284 --rc geninfo_unexecuted_blocks=1 00:14:07.284 00:14:07.284 ' 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:07.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.284 --rc genhtml_branch_coverage=1 00:14:07.284 --rc genhtml_function_coverage=1 00:14:07.284 --rc genhtml_legend=1 00:14:07.284 --rc geninfo_all_blocks=1 00:14:07.284 --rc geninfo_unexecuted_blocks=1 00:14:07.284 00:14:07.284 ' 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:07.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.284 --rc genhtml_branch_coverage=1 00:14:07.284 --rc genhtml_function_coverage=1 00:14:07.284 --rc genhtml_legend=1 00:14:07.284 --rc geninfo_all_blocks=1 00:14:07.284 --rc geninfo_unexecuted_blocks=1 00:14:07.284 00:14:07.284 ' 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:07.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.284 --rc genhtml_branch_coverage=1 00:14:07.284 --rc genhtml_function_coverage=1 00:14:07.284 --rc genhtml_legend=1 00:14:07.284 --rc geninfo_all_blocks=1 00:14:07.284 --rc geninfo_unexecuted_blocks=1 00:14:07.284 00:14:07.284 ' 00:14:07.284 04:05:00 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:07.284 04:05:00 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:07.284 04:05:00 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:07.284 04:05:00 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:07.284 04:05:00 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:07.284 04:05:00 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:07.284 04:05:00 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:07.284 04:05:00 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:07.284 04:05:00 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:07.284 04:05:00 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:07.284 04:05:00 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71591 00:14:07.284 04:05:00 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:07.284 04:05:00 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71591 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71591 ']' 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.284 04:05:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.284 04:05:00 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:07.284 [2024-10-13 04:05:00.350839] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:14:07.284 [2024-10-13 04:05:00.350951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71591 ] 00:14:07.545 [2024-10-13 04:05:00.501012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:07.545 [2024-10-13 04:05:00.608338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.545 [2024-10-13 04:05:00.608419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.117 04:05:01 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.117 04:05:01 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:08.117 04:05:01 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:08.117 04:05:01 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.117 04:05:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.117 [2024-10-13 04:05:01.194633] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:08.117 [2024-10-13 04:05:01.196492] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:08.117 04:05:01 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.117 04:05:01 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:08.117 04:05:01 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.117 04:05:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.377 malloc0 00:14:08.377 04:05:01 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.377 04:05:01 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:08.377 04:05:01 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.377 04:05:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:08.377 [2024-10-13 04:05:01.298763] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:08.377 [2024-10-13 04:05:01.298859] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:08.377 [2024-10-13 04:05:01.298870] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:08.377 [2024-10-13 04:05:01.298877] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:08.377 [2024-10-13 04:05:01.306726] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:08.377 [2024-10-13 04:05:01.306745] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:08.377 [2024-10-13 04:05:01.314641] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:08.377 [2024-10-13 04:05:01.314780] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:08.377 [2024-10-13 04:05:01.331639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:08.377 1 00:14:08.377 04:05:01 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.377 04:05:01 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:09.320 04:05:02 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71626 00:14:09.320 04:05:02 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:09.320 04:05:02 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:09.320 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:09.320 fio-3.35 00:14:09.320 Starting 1 process 00:14:14.602 04:05:07 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71591 00:14:14.602 04:05:07 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:19.881 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71591 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:19.881 04:05:12 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71742 00:14:19.881 04:05:12 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:19.881 04:05:12 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:19.881 04:05:12 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71742 00:14:19.881 04:05:12 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71742 ']' 00:14:19.881 04:05:12 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.881 04:05:12 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:19.881 04:05:12 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.881 04:05:12 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:19.881 04:05:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:19.881 [2024-10-13 04:05:12.426119] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:14:19.881 [2024-10-13 04:05:12.426237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71742 ] 00:14:19.881 [2024-10-13 04:05:12.577230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:19.881 [2024-10-13 04:05:12.673638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.881 [2024-10-13 04:05:12.673660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.139 04:05:13 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:20.139 04:05:13 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:20.139 04:05:13 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:20.139 04:05:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.139 04:05:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.139 [2024-10-13 04:05:13.266634] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:20.139 [2024-10-13 04:05:13.268476] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:20.139 04:05:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.139 04:05:13 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:20.139 04:05:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.139 04:05:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.397 malloc0 00:14:20.397 04:05:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.397 04:05:13 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:20.397 04:05:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.397 04:05:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:20.397 [2024-10-13 04:05:13.370769] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:20.397 [2024-10-13 04:05:13.370808] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:20.397 [2024-10-13 04:05:13.370818] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:20.398 [2024-10-13 04:05:13.378665] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:20.398 [2024-10-13 04:05:13.378689] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:20.398 1 00:14:20.398 04:05:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.398 04:05:13 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71626 00:14:21.332 [2024-10-13 04:05:14.378724] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:21.332 [2024-10-13 04:05:14.386641] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:21.332 [2024-10-13 04:05:14.386664] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:22.266 [2024-10-13 04:05:15.386691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:22.266 [2024-10-13 04:05:15.390633] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:22.266 [2024-10-13 04:05:15.390649] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:23.640 [2024-10-13 04:05:16.390672] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:23.640 [2024-10-13 04:05:16.395642] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:23.640 [2024-10-13 04:05:16.395657] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:23.640 [2024-10-13 04:05:16.395665] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:23.640 [2024-10-13 04:05:16.395731] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:45.562 [2024-10-13 04:05:37.429639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:45.562 [2024-10-13 04:05:37.436241] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:45.562 [2024-10-13 04:05:37.441816] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:45.562 [2024-10-13 04:05:37.441836] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:12.092 00:15:12.092 fio_test: (groupid=0, jobs=1): err= 0: pid=71629: Sun Oct 13 04:06:02 2024 00:15:12.092 read: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(3442MiB/60001msec) 00:15:12.092 slat (nsec): min=982, max=128282, avg=4950.32, stdev=1531.14 00:15:12.092 clat (usec): min=660, max=30107k, avg=4248.45, stdev=252505.39 00:15:12.092 lat (usec): min=674, max=30107k, avg=4253.40, stdev=252505.39 00:15:12.092 clat percentiles (usec): 00:15:12.092 | 1.00th=[ 1729], 5.00th=[ 1827], 10.00th=[ 1876], 20.00th=[ 1942], 00:15:12.092 | 30.00th=[ 1975], 40.00th=[ 1991], 50.00th=[ 2024], 60.00th=[ 2040], 00:15:12.092 | 70.00th=[ 2057], 80.00th=[ 2089], 90.00th=[ 2180], 95.00th=[ 2966], 00:15:12.092 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 7111], 99.95th=[ 8225], 00:15:12.092 | 99.99th=[13304] 00:15:12.092 bw ( KiB/s): min=12742, max=130712, per=100.00%, avg=115652.23, stdev=17866.70, samples=60 00:15:12.092 iops : min= 3185, max=32678, avg=28913.05, stdev=4466.72, samples=60 00:15:12.092 write: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(3437MiB/60001msec); 0 zone resets 00:15:12.092 slat (nsec): min=953, max=147578, avg=4974.38, stdev=1568.43 00:15:12.092 clat (usec): min=633, max=30107k, avg=4463.32, stdev=260725.37 00:15:12.092 lat (usec): min=637, max=30107k, avg=4468.29, stdev=260725.37 00:15:12.092 clat percentiles (usec): 00:15:12.092 | 1.00th=[ 1778], 5.00th=[ 1909], 10.00th=[ 1958], 20.00th=[ 2024], 00:15:12.092 | 30.00th=[ 2057], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2114], 00:15:12.092 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2245], 95.00th=[ 2868], 00:15:12.092 | 99.00th=[ 5145], 99.50th=[ 5669], 99.90th=[ 7177], 99.95th=[ 8455], 00:15:12.092 | 99.99th=[13304] 00:15:12.092 bw ( KiB/s): min=12862, max=129768, per=100.00%, avg=115481.70, stdev=17963.68, samples=60 00:15:12.092 iops : min= 3215, max=32442, avg=28870.42, stdev=4490.97, samples=60 00:15:12.092 lat (usec) : 750=0.01%, 1000=0.01% 00:15:12.092 lat (msec) : 2=28.41%, 4=68.95%, 10=2.60%, 20=0.03%, >=2000=0.01% 00:15:12.092 cpu : usr=3.31%, sys=14.76%, ctx=58537, majf=0, minf=13 00:15:12.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:12.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:12.092 
issued rwts: total=881165,879806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:12.092 00:15:12.092 Run status group 0 (all jobs): 00:15:12.092 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=3442MiB (3609MB), run=60001-60001msec 00:15:12.092 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=3437MiB (3604MB), run=60001-60001msec 00:15:12.092 00:15:12.092 Disk stats (read/write): 00:15:12.092 ublkb1: ios=878094/876669, merge=0/0, ticks=3691412/3803095, in_queue=7494508, util=99.89% 00:15:12.092 04:06:02 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.092 [2024-10-13 04:06:02.589392] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:12.092 [2024-10-13 04:06:02.628665] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:12.092 [2024-10-13 04:06:02.628826] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:12.092 [2024-10-13 04:06:02.636645] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:12.092 [2024-10-13 04:06:02.636752] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:12.092 [2024-10-13 04:06:02.636759] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.092 04:06:02 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.092 [2024-10-13 04:06:02.652714] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:12.092 [2024-10-13 04:06:02.660629] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:12.092 [2024-10-13 04:06:02.660663] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.092 04:06:02 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:12.092 04:06:02 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:12.092 04:06:02 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71742 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 71742 ']' 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 71742 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71742 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.092 killing process with pid 71742 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71742' 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@969 -- # kill 71742 00:15:12.092 04:06:02 ublk_recovery -- common/autotest_common.sh@974 -- # wait 71742 
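For reference, the recovery flow this test exercises can be condensed to the RPC sequence below. Here rpc.py stands for the scripts/rpc.py path used throughout this log, $spdk_tgt_pid is a placeholder for the target pid (71591 in this run), and a loaded ublk_drv module plus a restarted spdk_tgt after the kill are assumed; this is a sketch of the sequence, not a replacement for ublk_recovery.sh.

  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128     # exposes /dev/ublkb1
  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  kill -9 "$spdk_tgt_pid"                          # simulate a target crash while I/O is in flight
  # restart spdk_tgt, recreate the target and the backing bdev, then recover instead of start:
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_recover_disk malloc0 1               # re-attaches the existing /dev/ublkb1

The fio job started before the kill then runs to completion against the recovered device, which is what the run statistics above reflect.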
00:15:12.092 [2024-10-13 04:06:03.793243] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:12.092 [2024-10-13 04:06:03.793295] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:12.092 00:15:12.092 real 1m4.356s 00:15:12.092 user 1m48.854s 00:15:12.092 sys 0m19.915s 00:15:12.092 04:06:04 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:12.092 ************************************ 00:15:12.092 END TEST ublk_recovery 00:15:12.092 ************************************ 00:15:12.092 04:06:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.092 04:06:04 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:12.092 04:06:04 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:12.092 04:06:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:12.093 04:06:04 -- common/autotest_common.sh@10 -- # set +x 00:15:12.093 04:06:04 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:12.093 04:06:04 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:12.093 04:06:04 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:12.093 04:06:04 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:12.093 04:06:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:12.093 04:06:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:12.093 04:06:04 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:12.093 04:06:04 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:12.093 04:06:04 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:12.093 04:06:04 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:12.093 04:06:04 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:12.093 04:06:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:12.093 04:06:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:12.093 04:06:04 -- common/autotest_common.sh@10 -- # set +x 00:15:12.093 ************************************ 00:15:12.093 START TEST ftl 00:15:12.093 ************************************ 00:15:12.093 04:06:04 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:12.093 * Looking for test storage... 
00:15:12.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:12.093 04:06:04 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:12.093 04:06:04 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:12.093 04:06:04 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:15:12.093 04:06:04 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:12.093 04:06:04 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.093 04:06:04 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.093 04:06:04 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.093 04:06:04 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.093 04:06:04 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.093 04:06:04 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.093 04:06:04 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.093 04:06:04 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.093 04:06:04 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.093 04:06:04 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.093 04:06:04 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.093 04:06:04 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:12.093 04:06:04 ftl -- scripts/common.sh@345 -- # : 1 00:15:12.093 04:06:04 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.093 04:06:04 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.093 04:06:04 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:12.093 04:06:04 ftl -- scripts/common.sh@353 -- # local d=1 00:15:12.093 04:06:04 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.093 04:06:04 ftl -- scripts/common.sh@355 -- # echo 1 00:15:12.093 04:06:04 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.093 04:06:04 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:12.093 04:06:04 ftl -- scripts/common.sh@353 -- # local d=2 00:15:12.093 04:06:04 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.093 04:06:04 ftl -- scripts/common.sh@355 -- # echo 2 00:15:12.093 04:06:04 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.093 04:06:04 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.093 04:06:04 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.093 04:06:04 ftl -- scripts/common.sh@368 -- # return 0 00:15:12.093 04:06:04 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.093 04:06:04 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:12.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.093 --rc genhtml_branch_coverage=1 00:15:12.093 --rc genhtml_function_coverage=1 00:15:12.093 --rc genhtml_legend=1 00:15:12.093 --rc geninfo_all_blocks=1 00:15:12.093 --rc geninfo_unexecuted_blocks=1 00:15:12.093 00:15:12.093 ' 00:15:12.093 04:06:04 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:12.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.093 --rc genhtml_branch_coverage=1 00:15:12.093 --rc genhtml_function_coverage=1 00:15:12.093 --rc genhtml_legend=1 00:15:12.093 --rc geninfo_all_blocks=1 00:15:12.093 --rc geninfo_unexecuted_blocks=1 00:15:12.093 00:15:12.093 ' 00:15:12.093 04:06:04 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:12.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.093 --rc genhtml_branch_coverage=1 00:15:12.093 --rc genhtml_function_coverage=1 00:15:12.093 --rc 
genhtml_legend=1 00:15:12.093 --rc geninfo_all_blocks=1 00:15:12.093 --rc geninfo_unexecuted_blocks=1 00:15:12.093 00:15:12.093 ' 00:15:12.093 04:06:04 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:12.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.093 --rc genhtml_branch_coverage=1 00:15:12.093 --rc genhtml_function_coverage=1 00:15:12.093 --rc genhtml_legend=1 00:15:12.093 --rc geninfo_all_blocks=1 00:15:12.093 --rc geninfo_unexecuted_blocks=1 00:15:12.093 00:15:12.093 ' 00:15:12.093 04:06:04 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:12.093 04:06:04 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:12.093 04:06:04 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:12.093 04:06:04 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:12.093 04:06:04 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:12.093 04:06:04 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:12.093 04:06:04 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.093 04:06:04 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:12.093 04:06:04 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:12.093 04:06:04 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.093 04:06:04 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.093 04:06:04 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:12.093 04:06:04 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:12.093 04:06:04 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:12.093 04:06:04 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:12.093 04:06:04 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:12.093 04:06:04 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:12.093 04:06:04 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.093 04:06:04 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:12.093 04:06:04 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:12.093 04:06:04 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:12.093 04:06:04 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:12.093 04:06:04 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:12.093 04:06:04 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:12.093 04:06:04 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:12.093 04:06:04 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:12.093 04:06:04 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:12.093 04:06:04 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:12.093 04:06:04 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:12.093 04:06:04 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.093 04:06:04 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:12.093 04:06:04 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:12.093 04:06:04 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:12.093 04:06:04 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:12.093 04:06:04 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:12.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:12.093 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:12.093 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:12.093 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:12.093 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:12.093 04:06:05 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72547 00:15:12.093 04:06:05 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72547 00:15:12.093 04:06:05 ftl -- common/autotest_common.sh@831 -- # '[' -z 72547 ']' 00:15:12.093 04:06:05 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.093 04:06:05 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.093 04:06:05 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.093 04:06:05 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.093 04:06:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:12.093 04:06:05 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:12.093 [2024-10-13 04:06:05.199401] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:15:12.093 [2024-10-13 04:06:05.199526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72547 ] 00:15:12.352 [2024-10-13 04:06:05.347206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.352 [2024-10-13 04:06:05.428577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.919 04:06:06 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.919 04:06:06 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:12.919 04:06:06 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:13.177 04:06:06 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:13.744 04:06:06 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:13.745 04:06:06 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:14.310 04:06:07 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:14.310 04:06:07 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:14.310 04:06:07 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:14.568 04:06:07 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:14.568 04:06:07 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:14.568 04:06:07 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:14.568 04:06:07 ftl -- ftl/ftl.sh@50 -- # break 00:15:14.568 04:06:07 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:14.568 04:06:07 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:14.568 04:06:07 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:14.568 04:06:07 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:14.826 04:06:07 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:14.826 04:06:07 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:14.826 04:06:07 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:14.826 04:06:07 ftl -- ftl/ftl.sh@63 -- # break 00:15:14.826 04:06:07 ftl -- ftl/ftl.sh@66 -- # killprocess 72547 00:15:14.826 04:06:07 ftl -- common/autotest_common.sh@950 -- # '[' -z 72547 ']' 00:15:14.826 04:06:07 ftl -- common/autotest_common.sh@954 -- # kill -0 72547 00:15:14.826 04:06:07 ftl -- common/autotest_common.sh@955 -- # uname 00:15:14.826 04:06:07 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.826 04:06:07 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72547 00:15:14.826 killing process with pid 72547 00:15:14.826 04:06:07 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:14.826 04:06:07 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:14.826 04:06:07 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72547' 00:15:14.826 04:06:07 ftl -- common/autotest_common.sh@969 -- # kill 72547 00:15:14.826 04:06:07 ftl -- common/autotest_common.sh@974 -- # wait 72547 00:15:16.201 04:06:09 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:16.201 04:06:09 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:16.201 04:06:09 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:16.201 04:06:09 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:16.201 04:06:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:16.201 ************************************ 00:15:16.201 START TEST ftl_fio_basic 00:15:16.201 ************************************ 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:16.201 * Looking for test storage... 
00:15:16.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:16.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.201 --rc genhtml_branch_coverage=1 00:15:16.201 --rc genhtml_function_coverage=1 00:15:16.201 --rc genhtml_legend=1 00:15:16.201 --rc geninfo_all_blocks=1 00:15:16.201 --rc geninfo_unexecuted_blocks=1 00:15:16.201 00:15:16.201 ' 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:16.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.201 --rc 
genhtml_branch_coverage=1 00:15:16.201 --rc genhtml_function_coverage=1 00:15:16.201 --rc genhtml_legend=1 00:15:16.201 --rc geninfo_all_blocks=1 00:15:16.201 --rc geninfo_unexecuted_blocks=1 00:15:16.201 00:15:16.201 ' 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:16.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.201 --rc genhtml_branch_coverage=1 00:15:16.201 --rc genhtml_function_coverage=1 00:15:16.201 --rc genhtml_legend=1 00:15:16.201 --rc geninfo_all_blocks=1 00:15:16.201 --rc geninfo_unexecuted_blocks=1 00:15:16.201 00:15:16.201 ' 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:16.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.201 --rc genhtml_branch_coverage=1 00:15:16.201 --rc genhtml_function_coverage=1 00:15:16.201 --rc genhtml_legend=1 00:15:16.201 --rc geninfo_all_blocks=1 00:15:16.201 --rc geninfo_unexecuted_blocks=1 00:15:16.201 00:15:16.201 ' 00:15:16.201 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:16.202 
04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72679 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72679 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 72679 ']' 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
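(Editorial note on the trace at this point: fio.sh is about to start spdk_tgt with core mask 7 and then block in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers, which is why the reactors on cores 0-2 are reported just below. A minimal stand-alone sketch of that launch-and-wait pattern, assuming the repository paths shown in this log; the polling loop is an illustrative stand-in for waitforlisten rather than a copy of it, and rpc_get_methods is used here only as a cheap probe that the socket is live.)

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$spdk_tgt" -m 7 &                    # core mask 0x7 -> three reactors (cores 0, 1, 2)
    svcpid=$!
    # Stand-in for waitforlisten: poll until the UNIX-domain RPC socket responds.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done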
00:15:16.202 04:06:09 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:16.202 04:06:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:16.460 [2024-10-13 04:06:09.382493] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:15:16.460 [2024-10-13 04:06:09.382659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72679 ] 00:15:16.460 [2024-10-13 04:06:09.529481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:16.719 [2024-10-13 04:06:09.632018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.719 [2024-10-13 04:06:09.632190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.719 [2024-10-13 04:06:09.632205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.284 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.285 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:17.285 04:06:10 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:17.285 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:17.285 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:17.285 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:17.285 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:17.285 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:17.543 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:17.543 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:17.543 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:17.543 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:17.543 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:17.543 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:17.543 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:17.543 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:17.543 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:17.543 { 00:15:17.543 "name": "nvme0n1", 00:15:17.543 "aliases": [ 00:15:17.543 "9f119677-399c-412c-9bf7-19d7c5b53de3" 00:15:17.543 ], 00:15:17.543 "product_name": "NVMe disk", 00:15:17.543 "block_size": 4096, 00:15:17.543 "num_blocks": 1310720, 00:15:17.543 "uuid": "9f119677-399c-412c-9bf7-19d7c5b53de3", 00:15:17.543 "numa_id": -1, 00:15:17.543 "assigned_rate_limits": { 00:15:17.543 "rw_ios_per_sec": 0, 00:15:17.543 "rw_mbytes_per_sec": 0, 00:15:17.543 "r_mbytes_per_sec": 0, 00:15:17.543 "w_mbytes_per_sec": 0 00:15:17.543 }, 00:15:17.543 "claimed": false, 00:15:17.543 "zoned": false, 00:15:17.543 "supported_io_types": { 00:15:17.543 "read": true, 00:15:17.543 "write": true, 00:15:17.543 "unmap": true, 00:15:17.543 "flush": true, 
00:15:17.543 "reset": true, 00:15:17.543 "nvme_admin": true, 00:15:17.543 "nvme_io": true, 00:15:17.543 "nvme_io_md": false, 00:15:17.543 "write_zeroes": true, 00:15:17.543 "zcopy": false, 00:15:17.543 "get_zone_info": false, 00:15:17.543 "zone_management": false, 00:15:17.543 "zone_append": false, 00:15:17.543 "compare": true, 00:15:17.543 "compare_and_write": false, 00:15:17.543 "abort": true, 00:15:17.543 "seek_hole": false, 00:15:17.543 "seek_data": false, 00:15:17.543 "copy": true, 00:15:17.543 "nvme_iov_md": false 00:15:17.543 }, 00:15:17.543 "driver_specific": { 00:15:17.543 "nvme": [ 00:15:17.543 { 00:15:17.543 "pci_address": "0000:00:11.0", 00:15:17.543 "trid": { 00:15:17.543 "trtype": "PCIe", 00:15:17.543 "traddr": "0000:00:11.0" 00:15:17.543 }, 00:15:17.543 "ctrlr_data": { 00:15:17.543 "cntlid": 0, 00:15:17.543 "vendor_id": "0x1b36", 00:15:17.543 "model_number": "QEMU NVMe Ctrl", 00:15:17.543 "serial_number": "12341", 00:15:17.543 "firmware_revision": "8.0.0", 00:15:17.543 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:17.543 "oacs": { 00:15:17.543 "security": 0, 00:15:17.543 "format": 1, 00:15:17.543 "firmware": 0, 00:15:17.543 "ns_manage": 1 00:15:17.543 }, 00:15:17.543 "multi_ctrlr": false, 00:15:17.543 "ana_reporting": false 00:15:17.543 }, 00:15:17.543 "vs": { 00:15:17.543 "nvme_version": "1.4" 00:15:17.543 }, 00:15:17.543 "ns_data": { 00:15:17.543 "id": 1, 00:15:17.543 "can_share": false 00:15:17.543 } 00:15:17.543 } 00:15:17.543 ], 00:15:17.543 "mp_policy": "active_passive" 00:15:17.543 } 00:15:17.543 } 00:15:17.543 ]' 00:15:17.543 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:17.802 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:17.802 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:17.802 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:17.802 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:17.802 04:06:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:17.802 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:17.802 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:17.802 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:17.802 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:17.802 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:17.802 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:18.061 04:06:10 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:18.061 04:06:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=149da3ed-ccf0-492d-be01-74feff88cf0a 00:15:18.061 04:06:11 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 149da3ed-ccf0-492d-be01-74feff88cf0a 00:15:18.319 04:06:11 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=590a3665-b652-41ad-ab83-712d545836fa 00:15:18.319 04:06:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 590a3665-b652-41ad-ab83-712d545836fa 00:15:18.319 04:06:11 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:18.319 04:06:11 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:18.319 04:06:11 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=590a3665-b652-41ad-ab83-712d545836fa 00:15:18.319 04:06:11 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:18.319 04:06:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 590a3665-b652-41ad-ab83-712d545836fa 00:15:18.319 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=590a3665-b652-41ad-ab83-712d545836fa 00:15:18.319 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:18.319 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:18.319 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:18.319 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 590a3665-b652-41ad-ab83-712d545836fa 00:15:18.578 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:18.578 { 00:15:18.578 "name": "590a3665-b652-41ad-ab83-712d545836fa", 00:15:18.578 "aliases": [ 00:15:18.578 "lvs/nvme0n1p0" 00:15:18.578 ], 00:15:18.578 "product_name": "Logical Volume", 00:15:18.578 "block_size": 4096, 00:15:18.578 "num_blocks": 26476544, 00:15:18.578 "uuid": "590a3665-b652-41ad-ab83-712d545836fa", 00:15:18.578 "assigned_rate_limits": { 00:15:18.578 "rw_ios_per_sec": 0, 00:15:18.578 "rw_mbytes_per_sec": 0, 00:15:18.578 "r_mbytes_per_sec": 0, 00:15:18.578 "w_mbytes_per_sec": 0 00:15:18.578 }, 00:15:18.578 "claimed": false, 00:15:18.578 "zoned": false, 00:15:18.578 "supported_io_types": { 00:15:18.578 "read": true, 00:15:18.578 "write": true, 00:15:18.578 "unmap": true, 00:15:18.578 "flush": false, 00:15:18.578 "reset": true, 00:15:18.578 "nvme_admin": false, 00:15:18.578 "nvme_io": false, 00:15:18.578 "nvme_io_md": false, 00:15:18.578 "write_zeroes": true, 00:15:18.578 "zcopy": false, 00:15:18.578 "get_zone_info": false, 00:15:18.578 "zone_management": false, 00:15:18.578 "zone_append": false, 00:15:18.578 "compare": false, 00:15:18.578 "compare_and_write": false, 00:15:18.578 "abort": false, 00:15:18.578 "seek_hole": true, 00:15:18.578 "seek_data": true, 00:15:18.578 "copy": false, 00:15:18.578 "nvme_iov_md": false 00:15:18.578 }, 00:15:18.578 "driver_specific": { 00:15:18.578 "lvol": { 00:15:18.578 "lvol_store_uuid": "149da3ed-ccf0-492d-be01-74feff88cf0a", 00:15:18.578 "base_bdev": "nvme0n1", 00:15:18.578 "thin_provision": true, 00:15:18.578 "num_allocated_clusters": 0, 00:15:18.578 "snapshot": false, 00:15:18.578 "clone": false, 00:15:18.578 "esnap_clone": false 00:15:18.578 } 00:15:18.578 } 00:15:18.578 } 00:15:18.578 ]' 00:15:18.578 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:18.578 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:18.578 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:18.578 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:18.578 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:18.578 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:18.578 04:06:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:18.578 04:06:11 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:18.578 04:06:11 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
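(Editorial note: the sizes get_bdev_size reports in the trace above are simply block_size x num_blocks converted to MiB; with the figures taken verbatim from the bdev_get_bdevs dumps in this log, the arithmetic checks out.)

    echo $(( 4096 * 1310720  / 1024 / 1024 ))   # nvme0n1 -> 5120 MiB
    echo $(( 4096 * 26476544 / 1024 / 1024 ))   # lvol 590a3665-... -> 103424 MiB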
00:15:18.836 04:06:11 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:18.836 04:06:11 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:18.836 04:06:11 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 590a3665-b652-41ad-ab83-712d545836fa 00:15:18.836 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=590a3665-b652-41ad-ab83-712d545836fa 00:15:18.836 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:18.836 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:18.836 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:18.836 04:06:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 590a3665-b652-41ad-ab83-712d545836fa 00:15:19.095 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:19.095 { 00:15:19.095 "name": "590a3665-b652-41ad-ab83-712d545836fa", 00:15:19.095 "aliases": [ 00:15:19.095 "lvs/nvme0n1p0" 00:15:19.095 ], 00:15:19.095 "product_name": "Logical Volume", 00:15:19.095 "block_size": 4096, 00:15:19.095 "num_blocks": 26476544, 00:15:19.095 "uuid": "590a3665-b652-41ad-ab83-712d545836fa", 00:15:19.095 "assigned_rate_limits": { 00:15:19.095 "rw_ios_per_sec": 0, 00:15:19.095 "rw_mbytes_per_sec": 0, 00:15:19.095 "r_mbytes_per_sec": 0, 00:15:19.095 "w_mbytes_per_sec": 0 00:15:19.095 }, 00:15:19.095 "claimed": false, 00:15:19.095 "zoned": false, 00:15:19.095 "supported_io_types": { 00:15:19.095 "read": true, 00:15:19.095 "write": true, 00:15:19.095 "unmap": true, 00:15:19.095 "flush": false, 00:15:19.095 "reset": true, 00:15:19.095 "nvme_admin": false, 00:15:19.095 "nvme_io": false, 00:15:19.095 "nvme_io_md": false, 00:15:19.095 "write_zeroes": true, 00:15:19.095 "zcopy": false, 00:15:19.095 "get_zone_info": false, 00:15:19.095 "zone_management": false, 00:15:19.095 "zone_append": false, 00:15:19.095 "compare": false, 00:15:19.095 "compare_and_write": false, 00:15:19.095 "abort": false, 00:15:19.095 "seek_hole": true, 00:15:19.095 "seek_data": true, 00:15:19.095 "copy": false, 00:15:19.095 "nvme_iov_md": false 00:15:19.095 }, 00:15:19.095 "driver_specific": { 00:15:19.095 "lvol": { 00:15:19.095 "lvol_store_uuid": "149da3ed-ccf0-492d-be01-74feff88cf0a", 00:15:19.095 "base_bdev": "nvme0n1", 00:15:19.095 "thin_provision": true, 00:15:19.095 "num_allocated_clusters": 0, 00:15:19.095 "snapshot": false, 00:15:19.095 "clone": false, 00:15:19.095 "esnap_clone": false 00:15:19.095 } 00:15:19.095 } 00:15:19.095 } 00:15:19.095 ]' 00:15:19.095 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:19.095 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:19.095 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:19.095 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:19.095 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:19.095 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:19.095 04:06:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:19.095 04:06:12 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:19.354 04:06:12 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:19.354 04:06:12 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:15:19.354 04:06:12 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:19.354 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:19.354 04:06:12 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 590a3665-b652-41ad-ab83-712d545836fa 00:15:19.354 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=590a3665-b652-41ad-ab83-712d545836fa 00:15:19.354 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:19.354 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:19.354 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:19.354 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 590a3665-b652-41ad-ab83-712d545836fa 00:15:19.612 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:19.612 { 00:15:19.612 "name": "590a3665-b652-41ad-ab83-712d545836fa", 00:15:19.612 "aliases": [ 00:15:19.612 "lvs/nvme0n1p0" 00:15:19.612 ], 00:15:19.612 "product_name": "Logical Volume", 00:15:19.612 "block_size": 4096, 00:15:19.612 "num_blocks": 26476544, 00:15:19.612 "uuid": "590a3665-b652-41ad-ab83-712d545836fa", 00:15:19.612 "assigned_rate_limits": { 00:15:19.612 "rw_ios_per_sec": 0, 00:15:19.612 "rw_mbytes_per_sec": 0, 00:15:19.612 "r_mbytes_per_sec": 0, 00:15:19.612 "w_mbytes_per_sec": 0 00:15:19.612 }, 00:15:19.612 "claimed": false, 00:15:19.612 "zoned": false, 00:15:19.612 "supported_io_types": { 00:15:19.612 "read": true, 00:15:19.612 "write": true, 00:15:19.612 "unmap": true, 00:15:19.612 "flush": false, 00:15:19.612 "reset": true, 00:15:19.612 "nvme_admin": false, 00:15:19.612 "nvme_io": false, 00:15:19.612 "nvme_io_md": false, 00:15:19.612 "write_zeroes": true, 00:15:19.612 "zcopy": false, 00:15:19.612 "get_zone_info": false, 00:15:19.612 "zone_management": false, 00:15:19.612 "zone_append": false, 00:15:19.612 "compare": false, 00:15:19.612 "compare_and_write": false, 00:15:19.612 "abort": false, 00:15:19.612 "seek_hole": true, 00:15:19.612 "seek_data": true, 00:15:19.612 "copy": false, 00:15:19.612 "nvme_iov_md": false 00:15:19.612 }, 00:15:19.612 "driver_specific": { 00:15:19.612 "lvol": { 00:15:19.612 "lvol_store_uuid": "149da3ed-ccf0-492d-be01-74feff88cf0a", 00:15:19.612 "base_bdev": "nvme0n1", 00:15:19.612 "thin_provision": true, 00:15:19.612 "num_allocated_clusters": 0, 00:15:19.612 "snapshot": false, 00:15:19.612 "clone": false, 00:15:19.612 "esnap_clone": false 00:15:19.612 } 00:15:19.612 } 00:15:19.612 } 00:15:19.612 ]' 00:15:19.612 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:19.612 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:19.612 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:19.612 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:19.612 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:19.612 04:06:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:19.612 04:06:12 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:19.612 04:06:12 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:19.612 04:06:12 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
590a3665-b652-41ad-ab83-712d545836fa -c nvc0n1p0 --l2p_dram_limit 60 00:15:19.871 [2024-10-13 04:06:12.792298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.871 [2024-10-13 04:06:12.792341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:19.871 [2024-10-13 04:06:12.792353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:19.872 [2024-10-13 04:06:12.792360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.872 [2024-10-13 04:06:12.792413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.872 [2024-10-13 04:06:12.792422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:19.872 [2024-10-13 04:06:12.792430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:19.872 [2024-10-13 04:06:12.792437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.872 [2024-10-13 04:06:12.792467] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:19.872 [2024-10-13 04:06:12.795522] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:19.872 [2024-10-13 04:06:12.795560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.872 [2024-10-13 04:06:12.795568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:19.872 [2024-10-13 04:06:12.795577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:15:19.872 [2024-10-13 04:06:12.795583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.872 [2024-10-13 04:06:12.795747] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 998cda62-609e-4ab9-a7b7-f81aee80cd24 00:15:19.872 [2024-10-13 04:06:12.796733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.872 [2024-10-13 04:06:12.796763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:19.872 [2024-10-13 04:06:12.796771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:15:19.872 [2024-10-13 04:06:12.796780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.872 [2024-10-13 04:06:12.801566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.872 [2024-10-13 04:06:12.801595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:19.872 [2024-10-13 04:06:12.801602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.725 ms 00:15:19.872 [2024-10-13 04:06:12.801610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.872 [2024-10-13 04:06:12.801696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.872 [2024-10-13 04:06:12.801706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:19.872 [2024-10-13 04:06:12.801715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:15:19.872 [2024-10-13 04:06:12.801724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.872 [2024-10-13 04:06:12.801761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.872 [2024-10-13 04:06:12.801771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:19.872 [2024-10-13 04:06:12.801777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.006 ms 00:15:19.872 [2024-10-13 04:06:12.801784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.872 [2024-10-13 04:06:12.801806] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:19.872 [2024-10-13 04:06:12.804682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.872 [2024-10-13 04:06:12.804706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:19.872 [2024-10-13 04:06:12.804716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:15:19.872 [2024-10-13 04:06:12.804722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.872 [2024-10-13 04:06:12.804753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.872 [2024-10-13 04:06:12.804761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:19.872 [2024-10-13 04:06:12.804768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:19.872 [2024-10-13 04:06:12.804774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.872 [2024-10-13 04:06:12.804792] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:19.872 [2024-10-13 04:06:12.804905] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:19.872 [2024-10-13 04:06:12.804920] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:19.872 [2024-10-13 04:06:12.804928] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:19.872 [2024-10-13 04:06:12.804937] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:19.872 [2024-10-13 04:06:12.804944] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:19.872 [2024-10-13 04:06:12.804952] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:19.872 [2024-10-13 04:06:12.804958] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:19.872 [2024-10-13 04:06:12.804965] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:19.872 [2024-10-13 04:06:12.804970] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:19.872 [2024-10-13 04:06:12.804977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.872 [2024-10-13 04:06:12.804983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:19.872 [2024-10-13 04:06:12.804991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:15:19.872 [2024-10-13 04:06:12.805000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.872 [2024-10-13 04:06:12.805067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.872 [2024-10-13 04:06:12.805074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:19.872 [2024-10-13 04:06:12.805081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:19.872 [2024-10-13 04:06:12.805086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.872 [2024-10-13 04:06:12.805176] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:15:19.872 [2024-10-13 04:06:12.805191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:19.872 [2024-10-13 04:06:12.805199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:19.872 [2024-10-13 04:06:12.805205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:19.872 [2024-10-13 04:06:12.805220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:19.872 [2024-10-13 04:06:12.805232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:19.872 [2024-10-13 04:06:12.805239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:19.872 [2024-10-13 04:06:12.805252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:19.872 [2024-10-13 04:06:12.805257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:19.872 [2024-10-13 04:06:12.805263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:19.872 [2024-10-13 04:06:12.805268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:19.872 [2024-10-13 04:06:12.805278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:19.872 [2024-10-13 04:06:12.805283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:19.872 [2024-10-13 04:06:12.805297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:19.872 [2024-10-13 04:06:12.805303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:19.872 [2024-10-13 04:06:12.805314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:19.872 [2024-10-13 04:06:12.805326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:19.872 [2024-10-13 04:06:12.805331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:19.872 [2024-10-13 04:06:12.805342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:19.872 [2024-10-13 04:06:12.805349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:19.872 [2024-10-13 04:06:12.805360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:19.872 [2024-10-13 04:06:12.805365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:19.872 [2024-10-13 04:06:12.805376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:19.872 [2024-10-13 04:06:12.805384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:19.872 [2024-10-13 04:06:12.805396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:19.872 [2024-10-13 04:06:12.805411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:19.872 [2024-10-13 04:06:12.805417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:19.872 [2024-10-13 04:06:12.805422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:19.872 [2024-10-13 04:06:12.805428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:19.872 [2024-10-13 04:06:12.805433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:19.872 [2024-10-13 04:06:12.805443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:19.872 [2024-10-13 04:06:12.805450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805455] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:19.872 [2024-10-13 04:06:12.805462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:19.872 [2024-10-13 04:06:12.805468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:19.872 [2024-10-13 04:06:12.805476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:19.872 [2024-10-13 04:06:12.805482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:19.872 [2024-10-13 04:06:12.805490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:19.872 [2024-10-13 04:06:12.805495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:19.872 [2024-10-13 04:06:12.805502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:19.873 [2024-10-13 04:06:12.805506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:19.873 [2024-10-13 04:06:12.805513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:19.873 [2024-10-13 04:06:12.805521] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:19.873 [2024-10-13 04:06:12.805530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:19.873 [2024-10-13 04:06:12.805536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:19.873 [2024-10-13 04:06:12.805544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:19.873 [2024-10-13 04:06:12.805549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:19.873 [2024-10-13 04:06:12.805557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:19.873 [2024-10-13 04:06:12.805562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:19.873 [2024-10-13 04:06:12.805569] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:19.873 [2024-10-13 04:06:12.805574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:19.873 [2024-10-13 04:06:12.805581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:19.873 [2024-10-13 04:06:12.805586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:19.873 [2024-10-13 04:06:12.805594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:19.873 [2024-10-13 04:06:12.805599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:19.873 [2024-10-13 04:06:12.805607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:19.873 [2024-10-13 04:06:12.805622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:19.873 [2024-10-13 04:06:12.805630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:19.873 [2024-10-13 04:06:12.805635] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:19.873 [2024-10-13 04:06:12.805642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:19.873 [2024-10-13 04:06:12.805648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:19.873 [2024-10-13 04:06:12.805655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:19.873 [2024-10-13 04:06:12.805661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:19.873 [2024-10-13 04:06:12.805668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:19.873 [2024-10-13 04:06:12.805674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:19.873 [2024-10-13 04:06:12.805681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:19.873 [2024-10-13 04:06:12.805689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:15:19.873 [2024-10-13 04:06:12.805697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:19.873 [2024-10-13 04:06:12.805757] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
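(Editorial note: every RPC needed to rebuild the bdev stack that FTL is initializing here appears verbatim in the trace above. Collected in order as a rough stand-alone sketch; the lvstore and lvol UUIDs are the ones generated in this particular run and would differ on a fresh run.)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe device
    "$rpc" bdev_nvme_attach_controller -b nvc0  -t PCIe -a 0000:00:10.0   # NV cache NVMe device
    "$rpc" bdev_lvol_create_lvstore nvme0n1 lvs                           # -> 149da3ed-... in this run
    "$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u 149da3ed-ccf0-492d-be01-74feff88cf0a
    "$rpc" bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB write-buffer slice
    "$rpc" -t 240 bdev_ftl_create -b ftl0 -d 590a3665-b652-41ad-ab83-712d545836fa \
        -c nvc0n1p0 --l2p_dram_limit 60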
00:15:19.873 [2024-10-13 04:06:12.805768] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:22.404 [2024-10-13 04:06:15.087409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.404 [2024-10-13 04:06:15.087476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:22.404 [2024-10-13 04:06:15.087496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2281.639 ms 00:15:22.404 [2024-10-13 04:06:15.087512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.404 [2024-10-13 04:06:15.112398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.404 [2024-10-13 04:06:15.112454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:22.404 [2024-10-13 04:06:15.112474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.618 ms 00:15:22.404 [2024-10-13 04:06:15.112489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.404 [2024-10-13 04:06:15.112658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.404 [2024-10-13 04:06:15.112682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:22.404 [2024-10-13 04:06:15.112703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:15:22.404 [2024-10-13 04:06:15.112722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.404 [2024-10-13 04:06:15.154678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.404 [2024-10-13 04:06:15.154734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:22.404 [2024-10-13 04:06:15.154753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.890 ms 00:15:22.404 [2024-10-13 04:06:15.154771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.404 [2024-10-13 04:06:15.154833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.404 [2024-10-13 04:06:15.154854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:22.404 [2024-10-13 04:06:15.154871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:22.404 [2024-10-13 04:06:15.154889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.404 [2024-10-13 04:06:15.155275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.404 [2024-10-13 04:06:15.155312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:22.404 [2024-10-13 04:06:15.155329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:15:22.404 [2024-10-13 04:06:15.155346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.404 [2024-10-13 04:06:15.155519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.404 [2024-10-13 04:06:15.155546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:22.404 [2024-10-13 04:06:15.155563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:15:22.404 [2024-10-13 04:06:15.155581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.404 [2024-10-13 04:06:15.170794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.404 [2024-10-13 04:06:15.170831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:22.404 [2024-10-13 
04:06:15.170845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.158 ms 00:15:22.404 [2024-10-13 04:06:15.170860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.404 [2024-10-13 04:06:15.182091] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:22.404 [2024-10-13 04:06:15.195741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.404 [2024-10-13 04:06:15.195790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:22.404 [2024-10-13 04:06:15.195806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.772 ms 00:15:22.404 [2024-10-13 04:06:15.195817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.404 [2024-10-13 04:06:15.244521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.404 [2024-10-13 04:06:15.244566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:22.404 [2024-10-13 04:06:15.244581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.662 ms 00:15:22.404 [2024-10-13 04:06:15.244590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.404 [2024-10-13 04:06:15.244783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.404 [2024-10-13 04:06:15.244795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:22.404 [2024-10-13 04:06:15.244807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:15:22.404 [2024-10-13 04:06:15.244815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.405 [2024-10-13 04:06:15.267601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.405 [2024-10-13 04:06:15.267656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:22.405 [2024-10-13 04:06:15.267670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.737 ms 00:15:22.405 [2024-10-13 04:06:15.267680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.405 [2024-10-13 04:06:15.290223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.405 [2024-10-13 04:06:15.290255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:22.405 [2024-10-13 04:06:15.290268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.502 ms 00:15:22.405 [2024-10-13 04:06:15.290275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.405 [2024-10-13 04:06:15.290843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.405 [2024-10-13 04:06:15.290865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:22.405 [2024-10-13 04:06:15.290878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:15:22.405 [2024-10-13 04:06:15.290885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.405 [2024-10-13 04:06:15.353160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.405 [2024-10-13 04:06:15.353193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:22.405 [2024-10-13 04:06:15.353208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.233 ms 00:15:22.405 [2024-10-13 04:06:15.353217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.405 [2024-10-13 
04:06:15.377277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.405 [2024-10-13 04:06:15.377313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:22.405 [2024-10-13 04:06:15.377325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.978 ms 00:15:22.405 [2024-10-13 04:06:15.377333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.405 [2024-10-13 04:06:15.400251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.405 [2024-10-13 04:06:15.400283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:22.405 [2024-10-13 04:06:15.400294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.877 ms 00:15:22.405 [2024-10-13 04:06:15.400302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.405 [2024-10-13 04:06:15.423268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.405 [2024-10-13 04:06:15.423300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:22.405 [2024-10-13 04:06:15.423312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.926 ms 00:15:22.405 [2024-10-13 04:06:15.423319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.405 [2024-10-13 04:06:15.423365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.405 [2024-10-13 04:06:15.423374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:22.405 [2024-10-13 04:06:15.423386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:22.405 [2024-10-13 04:06:15.423394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.405 [2024-10-13 04:06:15.423481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:22.405 [2024-10-13 04:06:15.423491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:22.405 [2024-10-13 04:06:15.423500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:22.405 [2024-10-13 04:06:15.423507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:22.405 [2024-10-13 04:06:15.424466] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2631.741 ms, result 0 00:15:22.405 { 00:15:22.405 "name": "ftl0", 00:15:22.405 "uuid": "998cda62-609e-4ab9-a7b7-f81aee80cd24" 00:15:22.405 } 00:15:22.405 04:06:15 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:22.405 04:06:15 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:22.405 04:06:15 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:22.405 04:06:15 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:22.405 04:06:15 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:22.405 04:06:15 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:22.405 04:06:15 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:22.666 04:06:15 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:22.666 [ 00:15:22.666 { 00:15:22.666 "name": "ftl0", 00:15:22.666 "aliases": [ 00:15:22.666 "998cda62-609e-4ab9-a7b7-f81aee80cd24" 00:15:22.666 ], 00:15:22.666 "product_name": "FTL 
disk", 00:15:22.666 "block_size": 4096, 00:15:22.666 "num_blocks": 20971520, 00:15:22.666 "uuid": "998cda62-609e-4ab9-a7b7-f81aee80cd24", 00:15:22.666 "assigned_rate_limits": { 00:15:22.666 "rw_ios_per_sec": 0, 00:15:22.666 "rw_mbytes_per_sec": 0, 00:15:22.666 "r_mbytes_per_sec": 0, 00:15:22.666 "w_mbytes_per_sec": 0 00:15:22.666 }, 00:15:22.666 "claimed": false, 00:15:22.666 "zoned": false, 00:15:22.666 "supported_io_types": { 00:15:22.666 "read": true, 00:15:22.666 "write": true, 00:15:22.666 "unmap": true, 00:15:22.666 "flush": true, 00:15:22.666 "reset": false, 00:15:22.666 "nvme_admin": false, 00:15:22.666 "nvme_io": false, 00:15:22.666 "nvme_io_md": false, 00:15:22.666 "write_zeroes": true, 00:15:22.666 "zcopy": false, 00:15:22.666 "get_zone_info": false, 00:15:22.666 "zone_management": false, 00:15:22.666 "zone_append": false, 00:15:22.666 "compare": false, 00:15:22.666 "compare_and_write": false, 00:15:22.666 "abort": false, 00:15:22.666 "seek_hole": false, 00:15:22.666 "seek_data": false, 00:15:22.666 "copy": false, 00:15:22.666 "nvme_iov_md": false 00:15:22.666 }, 00:15:22.666 "driver_specific": { 00:15:22.666 "ftl": { 00:15:22.666 "base_bdev": "590a3665-b652-41ad-ab83-712d545836fa", 00:15:22.666 "cache": "nvc0n1p0" 00:15:22.666 } 00:15:22.666 } 00:15:22.666 } 00:15:22.666 ] 00:15:22.924 04:06:15 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:22.924 04:06:15 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:22.924 04:06:15 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:22.924 04:06:15 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:22.924 04:06:15 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:23.184 [2024-10-13 04:06:16.124907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.184 [2024-10-13 04:06:16.124953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:23.184 [2024-10-13 04:06:16.124965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:23.184 [2024-10-13 04:06:16.124974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.184 [2024-10-13 04:06:16.125009] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:23.184 [2024-10-13 04:06:16.127588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.184 [2024-10-13 04:06:16.127626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:23.184 [2024-10-13 04:06:16.127638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:15:23.184 [2024-10-13 04:06:16.127646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.184 [2024-10-13 04:06:16.128056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.184 [2024-10-13 04:06:16.128072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:23.184 [2024-10-13 04:06:16.128083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:15:23.184 [2024-10-13 04:06:16.128091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.184 [2024-10-13 04:06:16.131314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.184 [2024-10-13 04:06:16.131336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:23.184 
[2024-10-13 04:06:16.131348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.202 ms 00:15:23.184 [2024-10-13 04:06:16.131360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.184 [2024-10-13 04:06:16.137582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.184 [2024-10-13 04:06:16.137619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:23.184 [2024-10-13 04:06:16.137631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.198 ms 00:15:23.184 [2024-10-13 04:06:16.137640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.184 [2024-10-13 04:06:16.161437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.184 [2024-10-13 04:06:16.161475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:23.184 [2024-10-13 04:06:16.161487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.723 ms 00:15:23.184 [2024-10-13 04:06:16.161494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.184 [2024-10-13 04:06:16.176219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.184 [2024-10-13 04:06:16.176253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:23.184 [2024-10-13 04:06:16.176265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.664 ms 00:15:23.184 [2024-10-13 04:06:16.176273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.184 [2024-10-13 04:06:16.176453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.184 [2024-10-13 04:06:16.176466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:23.184 [2024-10-13 04:06:16.176476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:15:23.184 [2024-10-13 04:06:16.176483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.184 [2024-10-13 04:06:16.199640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.184 [2024-10-13 04:06:16.199669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:23.184 [2024-10-13 04:06:16.199681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.130 ms 00:15:23.184 [2024-10-13 04:06:16.199688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.184 [2024-10-13 04:06:16.222463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.184 [2024-10-13 04:06:16.222495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:23.184 [2024-10-13 04:06:16.222506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.738 ms 00:15:23.184 [2024-10-13 04:06:16.222512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.184 [2024-10-13 04:06:16.244809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.184 [2024-10-13 04:06:16.244841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:23.184 [2024-10-13 04:06:16.244852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.256 ms 00:15:23.184 [2024-10-13 04:06:16.244859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.184 [2024-10-13 04:06:16.267287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.184 [2024-10-13 04:06:16.267313] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:23.184 [2024-10-13 04:06:16.267325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.333 ms 00:15:23.184 [2024-10-13 04:06:16.267332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.184 [2024-10-13 04:06:16.267370] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:23.184 [2024-10-13 04:06:16.267384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 
[2024-10-13 04:06:16.267571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:23.184 [2024-10-13 04:06:16.267684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:15:23.185 [2024-10-13 04:06:16.267800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.267995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:23.185 [2024-10-13 04:06:16.268265] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:23.185 [2024-10-13 04:06:16.268274] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 998cda62-609e-4ab9-a7b7-f81aee80cd24 00:15:23.185 [2024-10-13 04:06:16.268282] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:23.185 [2024-10-13 04:06:16.268292] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:23.185 [2024-10-13 04:06:16.268298] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:23.185 [2024-10-13 04:06:16.268307] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:23.185 [2024-10-13 04:06:16.268314] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:23.185 [2024-10-13 04:06:16.268323] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:23.185 [2024-10-13 04:06:16.268331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:23.185 [2024-10-13 04:06:16.268339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:23.185 [2024-10-13 04:06:16.268345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:23.185 [2024-10-13 04:06:16.268354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.185 [2024-10-13 04:06:16.268361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:23.185 [2024-10-13 04:06:16.268371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:15:23.185 [2024-10-13 04:06:16.268378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.185 [2024-10-13 04:06:16.280724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.185 [2024-10-13 04:06:16.280752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:23.185 [2024-10-13 04:06:16.280764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.306 ms 00:15:23.185 [2024-10-13 04:06:16.280773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.185 [2024-10-13 04:06:16.281128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:23.185 [2024-10-13 04:06:16.281142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:23.185 [2024-10-13 04:06:16.281152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:15:23.185 [2024-10-13 04:06:16.281160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.185 [2024-10-13 04:06:16.324678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:23.185 [2024-10-13 04:06:16.324711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:23.185 [2024-10-13 04:06:16.324723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:23.185 [2024-10-13 04:06:16.324733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
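A note on the statistics dump a few records above: assuming the usual definition of write amplification, WAF is total media writes divided by user (host) writes. This shutdown dump reports total writes: 960 and user writes: 0, so the ratio is undefined and is printed as "inf"; by the same counters, the 960 writes in this phase were FTL-internal (metadata/housekeeping) rather than user I/O, which is an inference from the numbers, not a statement in the log.

    WAF = total_writes / user_writes = 960 / 0  ->  undefined, reported as "inf"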
00:15:23.185 [2024-10-13 04:06:16.324792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:23.185 [2024-10-13 04:06:16.324801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:23.185 [2024-10-13 04:06:16.324810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:23.185 [2024-10-13 04:06:16.324817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.185 [2024-10-13 04:06:16.324904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:23.185 [2024-10-13 04:06:16.324914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:23.185 [2024-10-13 04:06:16.324924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:23.185 [2024-10-13 04:06:16.324931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.185 [2024-10-13 04:06:16.324959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:23.185 [2024-10-13 04:06:16.324967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:23.186 [2024-10-13 04:06:16.324976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:23.186 [2024-10-13 04:06:16.324984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.444 [2024-10-13 04:06:16.406047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:23.444 [2024-10-13 04:06:16.406093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:23.444 [2024-10-13 04:06:16.406106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:23.444 [2024-10-13 04:06:16.406117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.444 [2024-10-13 04:06:16.468824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:23.444 [2024-10-13 04:06:16.468866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:23.444 [2024-10-13 04:06:16.468878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:23.444 [2024-10-13 04:06:16.468886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.444 [2024-10-13 04:06:16.468957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:23.444 [2024-10-13 04:06:16.468966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:23.444 [2024-10-13 04:06:16.468976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:23.444 [2024-10-13 04:06:16.468983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.444 [2024-10-13 04:06:16.469058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:23.444 [2024-10-13 04:06:16.469069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:23.444 [2024-10-13 04:06:16.469079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:23.444 [2024-10-13 04:06:16.469087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.444 [2024-10-13 04:06:16.469184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:23.444 [2024-10-13 04:06:16.469194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:23.444 [2024-10-13 04:06:16.469203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:23.444 [2024-10-13 
04:06:16.469210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.444 [2024-10-13 04:06:16.469254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:23.444 [2024-10-13 04:06:16.469265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:23.444 [2024-10-13 04:06:16.469277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:23.444 [2024-10-13 04:06:16.469284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.444 [2024-10-13 04:06:16.469325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:23.444 [2024-10-13 04:06:16.469333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:23.444 [2024-10-13 04:06:16.469342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:23.444 [2024-10-13 04:06:16.469348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.444 [2024-10-13 04:06:16.469399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:23.444 [2024-10-13 04:06:16.469410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:23.444 [2024-10-13 04:06:16.469419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:23.444 [2024-10-13 04:06:16.469426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:23.444 [2024-10-13 04:06:16.469567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.637 ms, result 0 00:15:23.444 true 00:15:23.444 04:06:16 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72679 00:15:23.444 04:06:16 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 72679 ']' 00:15:23.445 04:06:16 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 72679 00:15:23.445 04:06:16 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:23.445 04:06:16 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:23.445 04:06:16 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72679 00:15:23.445 killing process with pid 72679 00:15:23.445 04:06:16 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:23.445 04:06:16 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:23.445 04:06:16 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72679' 00:15:23.445 04:06:16 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 72679 00:15:23.445 04:06:16 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 72679 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:30.004 04:06:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:30.004 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:30.004 fio-3.35 00:15:30.004 Starting 1 thread 00:15:34.207 00:15:34.207 test: (groupid=0, jobs=1): err= 0: pid=72852: Sun Oct 13 04:06:27 2024 00:15:34.207 read: IOPS=1029, BW=68.4MiB/s (71.7MB/s)(255MiB/3723msec) 00:15:34.207 slat (nsec): min=2977, max=27700, avg=4759.84, stdev=2564.68 00:15:34.207 clat (usec): min=251, max=1486, avg=436.91, stdev=175.48 00:15:34.207 lat (usec): min=255, max=1490, avg=441.67, stdev=176.71 00:15:34.207 clat percentiles (usec): 00:15:34.207 | 1.00th=[ 281], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 322], 00:15:34.207 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 408], 00:15:34.207 | 70.00th=[ 506], 80.00th=[ 553], 90.00th=[ 594], 95.00th=[ 898], 00:15:34.207 | 99.00th=[ 1045], 99.50th=[ 1123], 99.90th=[ 1303], 99.95th=[ 1352], 00:15:34.207 | 99.99th=[ 1483] 00:15:34.207 write: IOPS=1036, BW=68.8MiB/s (72.2MB/s)(256MiB/3720msec); 0 zone resets 00:15:34.207 slat (nsec): min=13736, max=82845, avg=19262.94, stdev=4748.61 00:15:34.207 clat (usec): min=282, max=2273, avg=493.63, stdev=231.96 00:15:34.207 lat (usec): min=304, max=2289, avg=512.90, stdev=234.54 00:15:34.207 clat percentiles (usec): 00:15:34.207 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 347], 00:15:34.207 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 437], 00:15:34.207 | 70.00th=[ 578], 80.00th=[ 652], 90.00th=[ 709], 95.00th=[ 996], 00:15:34.207 | 99.00th=[ 1319], 99.50th=[ 1680], 99.90th=[ 2040], 99.95th=[ 2180], 00:15:34.207 | 99.99th=[ 2278] 00:15:34.207 bw ( KiB/s): min=48144, max=94928, per=97.82%, avg=68952.00, stdev=21578.90, samples=7 00:15:34.207 iops : min= 708, max= 1396, avg=1014.00, stdev=317.34, samples=7 00:15:34.207 lat (usec) : 500=66.43%, 750=26.00%, 
1000=4.67% 00:15:34.207 lat (msec) : 2=2.85%, 4=0.05% 00:15:34.207 cpu : usr=99.11%, sys=0.08%, ctx=6, majf=0, minf=1169 00:15:34.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:34.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.207 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:34.207 00:15:34.207 Run status group 0 (all jobs): 00:15:34.207 READ: bw=68.4MiB/s (71.7MB/s), 68.4MiB/s-68.4MiB/s (71.7MB/s-71.7MB/s), io=255MiB (267MB), run=3723-3723msec 00:15:34.207 WRITE: bw=68.8MiB/s (72.2MB/s), 68.8MiB/s-68.8MiB/s (72.2MB/s-72.2MB/s), io=256MiB (269MB), run=3720-3720msec 00:15:35.582 ----------------------------------------------------- 00:15:35.582 Suppressions used: 00:15:35.582 count bytes template 00:15:35.582 1 5 /usr/src/fio/parse.c 00:15:35.582 1 8 libtcmalloc_minimal.so 00:15:35.582 1 904 libcrypto.so 00:15:35.582 ----------------------------------------------------- 00:15:35.582 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:35.582 04:06:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:35.841 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:35.841 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:35.841 fio-3.35 00:15:35.841 Starting 2 threads 00:16:02.440 00:16:02.440 first_half: (groupid=0, jobs=1): err= 0: pid=72949: Sun Oct 13 04:06:52 2024 00:16:02.440 read: IOPS=2900, BW=11.3MiB/s (11.9MB/s)(255MiB/22477msec) 00:16:02.440 slat (usec): min=3, max=673, avg= 4.47, stdev= 3.30 00:16:02.440 clat (usec): min=550, max=288509, avg=33232.35, stdev=15279.25 00:16:02.440 lat (usec): min=554, max=288514, avg=33236.82, stdev=15279.28 00:16:02.440 clat percentiles (msec): 00:16:02.440 | 1.00th=[ 5], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 31], 00:16:02.440 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:16:02.440 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 36], 95.00th=[ 41], 00:16:02.440 | 99.00th=[ 121], 99.50th=[ 138], 99.90th=[ 197], 99.95th=[ 224], 00:16:02.440 | 99.99th=[ 279] 00:16:02.440 write: IOPS=4064, BW=15.9MiB/s (16.6MB/s)(256MiB/16123msec); 0 zone resets 00:16:02.440 slat (usec): min=3, max=2319, avg= 6.10, stdev=10.97 00:16:02.440 clat (usec): min=372, max=89431, avg=10823.93, stdev=19245.86 00:16:02.440 lat (usec): min=384, max=89438, avg=10830.03, stdev=19245.93 00:16:02.440 clat percentiles (usec): 00:16:02.440 | 1.00th=[ 652], 5.00th=[ 758], 10.00th=[ 881], 20.00th=[ 1090], 00:16:02.440 | 30.00th=[ 1270], 40.00th=[ 1795], 50.00th=[ 3621], 60.00th=[ 5080], 00:16:02.440 | 70.00th=[ 5932], 80.00th=[10683], 90.00th=[56361], 95.00th=[63701], 00:16:02.440 | 99.00th=[73925], 99.50th=[77071], 99.90th=[87557], 99.95th=[87557], 00:16:02.440 | 99.99th=[88605] 00:16:02.440 bw ( KiB/s): min= 824, max=54448, per=81.89%, avg=23831.27, stdev=16253.39, samples=22 00:16:02.440 iops : min= 206, max=13612, avg=5957.82, stdev=4063.35, samples=22 00:16:02.440 lat (usec) : 500=0.02%, 750=2.31%, 1000=5.51% 00:16:02.440 lat (msec) : 2=13.22%, 4=5.47%, 10=13.21%, 20=5.63%, 50=47.61% 00:16:02.440 lat (msec) : 100=6.31%, 250=0.70%, 500=0.02% 00:16:02.440 cpu : usr=98.74%, sys=0.36%, ctx=111, majf=0, minf=5611 00:16:02.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:02.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.440 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.440 issued rwts: total=65196,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.440 second_half: (groupid=0, jobs=1): err= 0: pid=72950: Sun Oct 13 04:06:52 2024 00:16:02.440 read: IOPS=2884, BW=11.3MiB/s (11.8MB/s)(255MiB/22605msec) 00:16:02.440 slat (nsec): min=3057, max=87390, avg=5252.45, stdev=1369.87 00:16:02.440 clat (usec): min=585, max=292135, avg=32377.00, stdev=14628.67 00:16:02.440 lat (usec): min=589, max=292140, avg=32382.25, stdev=14628.83 00:16:02.440 clat percentiles (msec): 00:16:02.440 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 29], 20.00th=[ 31], 00:16:02.440 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:16:02.440 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 36], 
95.00th=[ 40], 00:16:02.440 | 99.00th=[ 113], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 199], 00:16:02.440 | 99.99th=[ 288] 00:16:02.440 write: IOPS=3637, BW=14.2MiB/s (14.9MB/s)(256MiB/18016msec); 0 zone resets 00:16:02.440 slat (usec): min=3, max=2181, avg= 7.24, stdev=10.34 00:16:02.440 clat (usec): min=358, max=90160, avg=11914.02, stdev=19652.21 00:16:02.440 lat (usec): min=370, max=90171, avg=11921.26, stdev=19652.33 00:16:02.440 clat percentiles (usec): 00:16:02.440 | 1.00th=[ 652], 5.00th=[ 750], 10.00th=[ 865], 20.00th=[ 1090], 00:16:02.440 | 30.00th=[ 1336], 40.00th=[ 2868], 50.00th=[ 4146], 60.00th=[ 5473], 00:16:02.440 | 70.00th=[ 8586], 80.00th=[11994], 90.00th=[56361], 95.00th=[64226], 00:16:02.440 | 99.00th=[76022], 99.50th=[79168], 99.90th=[87557], 99.95th=[88605], 00:16:02.440 | 99.99th=[89654] 00:16:02.440 bw ( KiB/s): min= 192, max=40504, per=75.07%, avg=21845.33, stdev=13607.82, samples=24 00:16:02.440 iops : min= 48, max=10126, avg=5461.33, stdev=3401.96, samples=24 00:16:02.440 lat (usec) : 500=0.03%, 750=2.52%, 1000=5.41% 00:16:02.440 lat (msec) : 2=10.13%, 4=6.81%, 10=14.04%, 20=6.32%, 50=47.82% 00:16:02.440 lat (msec) : 100=6.32%, 250=0.59%, 500=0.01% 00:16:02.440 cpu : usr=99.16%, sys=0.17%, ctx=339, majf=0, minf=5488 00:16:02.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:02.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.440 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.440 issued rwts: total=65212,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.440 00:16:02.440 Run status group 0 (all jobs): 00:16:02.440 READ: bw=22.5MiB/s (23.6MB/s), 11.3MiB/s-11.3MiB/s (11.8MB/s-11.9MB/s), io=509MiB (534MB), run=22477-22605msec 00:16:02.440 WRITE: bw=28.4MiB/s (29.8MB/s), 14.2MiB/s-15.9MiB/s (14.9MB/s-16.6MB/s), io=512MiB (537MB), run=16123-18016msec 00:16:02.440 ----------------------------------------------------- 00:16:02.440 Suppressions used: 00:16:02.440 count bytes template 00:16:02.440 2 10 /usr/src/fio/parse.c 00:16:02.440 1 96 /usr/src/fio/iolog.c 00:16:02.440 1 8 libtcmalloc_minimal.so 00:16:02.440 1 904 libcrypto.so 00:16:02.440 ----------------------------------------------------- 00:16:02.440 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
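The xtrace above, repeated before each fio job in this test, shows how the fio_bdev helper locates the ASAN runtime that the SPDK fio plugin links against and preloads both before starting fio. Condensed into a sketch, with the paths exactly as they appear in this run; this is only the traced pattern, not the helper's full logic:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')    # resolves to /usr/lib64/libasan.so.8 in this run
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio <job-file>.fio

The job files select ioengine=spdk_bdev, as the fio banners above show, and preloading the plugin is what makes that external ioengine visible to fio.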
00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:02.440 04:06:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:02.440 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:02.440 fio-3.35 00:16:02.440 Starting 1 thread 00:16:14.714 00:16:14.714 test: (groupid=0, jobs=1): err= 0: pid=73248: Sun Oct 13 04:07:07 2024 00:16:14.714 read: IOPS=7947, BW=31.0MiB/s (32.6MB/s)(255MiB/8204msec) 00:16:14.714 slat (nsec): min=3030, max=41577, avg=4210.19, stdev=1039.85 00:16:14.714 clat (usec): min=553, max=31309, avg=16096.81, stdev=1839.93 00:16:14.714 lat (usec): min=557, max=31312, avg=16101.02, stdev=1839.98 00:16:14.714 clat percentiles (usec): 00:16:14.714 | 1.00th=[14746], 5.00th=[15008], 10.00th=[15139], 20.00th=[15270], 00:16:14.714 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15664], 60.00th=[15664], 00:16:14.714 | 70.00th=[15795], 80.00th=[15926], 90.00th=[17957], 95.00th=[20841], 00:16:14.714 | 99.00th=[24249], 99.50th=[24773], 99.90th=[26608], 99.95th=[27132], 00:16:14.714 | 99.99th=[30540] 00:16:14.714 write: IOPS=17.0k, BW=66.2MiB/s (69.4MB/s)(256MiB/3866msec); 0 zone resets 00:16:14.714 slat (usec): min=4, max=162, avg= 6.95, stdev= 2.61 00:16:14.714 clat (usec): min=462, max=46131, avg=7510.27, stdev=9369.29 00:16:14.714 lat (usec): min=468, max=46137, avg=7517.22, stdev=9369.32 00:16:14.714 clat percentiles (usec): 00:16:14.714 | 1.00th=[ 603], 5.00th=[ 685], 10.00th=[ 742], 20.00th=[ 857], 00:16:14.714 | 30.00th=[ 1004], 40.00th=[ 1319], 50.00th=[ 5211], 60.00th=[ 5800], 00:16:14.714 | 70.00th=[ 6718], 80.00th=[ 8029], 90.00th=[27132], 95.00th=[28967], 00:16:14.714 | 99.00th=[32375], 99.50th=[34341], 99.90th=[38011], 99.95th=[38536], 00:16:14.714 | 99.99th=[44827] 00:16:14.714 bw ( KiB/s): min=41976, max=86304, per=96.65%, avg=65536.00, stdev=14373.66, samples=8 00:16:14.714 iops : min=10494, max=21576, avg=16384.00, stdev=3593.42, samples=8 00:16:14.714 lat (usec) : 500=0.01%, 750=5.27%, 1000=9.58% 00:16:14.714 lat (msec) : 2=5.85%, 4=0.47%, 10=20.76%, 20=47.23%, 50=10.83% 00:16:14.714 cpu : usr=99.12%, sys=0.17%, ctx=25, majf=0, minf=5565 00:16:14.714 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:14.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.714 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:14.714 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.714 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:14.714 00:16:14.714 Run status group 0 (all jobs): 00:16:14.714 READ: bw=31.0MiB/s (32.6MB/s), 31.0MiB/s-31.0MiB/s (32.6MB/s-32.6MB/s), io=255MiB (267MB), run=8204-8204msec 00:16:14.714 WRITE: bw=66.2MiB/s (69.4MB/s), 66.2MiB/s-66.2MiB/s (69.4MB/s-69.4MB/s), io=256MiB (268MB), run=3866-3866msec 00:16:16.103 ----------------------------------------------------- 00:16:16.103 Suppressions used: 00:16:16.103 count bytes template 00:16:16.103 1 5 /usr/src/fio/parse.c 00:16:16.103 2 192 /usr/src/fio/iolog.c 00:16:16.103 1 8 libtcmalloc_minimal.so 00:16:16.103 1 904 libcrypto.so 00:16:16.103 ----------------------------------------------------- 00:16:16.103 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:16.103 Remove shared memory files 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57502 /dev/shm/spdk_tgt_trace.pid71591 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:16.103 00:16:16.103 real 1m0.100s 00:16:16.103 user 2m11.224s 00:16:16.103 sys 0m2.696s 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:16.103 04:07:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:16.103 ************************************ 00:16:16.103 END TEST ftl_fio_basic 00:16:16.103 ************************************ 00:16:16.388 04:07:09 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:16.388 04:07:09 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:16.388 04:07:09 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:16.388 04:07:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:16.388 ************************************ 00:16:16.388 START TEST ftl_bdevperf 00:16:16.388 ************************************ 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:16.388 * Looking for test storage... 
00:16:16.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:16.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.388 --rc genhtml_branch_coverage=1 00:16:16.388 --rc genhtml_function_coverage=1 00:16:16.388 --rc genhtml_legend=1 00:16:16.388 --rc geninfo_all_blocks=1 00:16:16.388 --rc geninfo_unexecuted_blocks=1 00:16:16.388 00:16:16.388 ' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:16.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.388 --rc genhtml_branch_coverage=1 00:16:16.388 
--rc genhtml_function_coverage=1 00:16:16.388 --rc genhtml_legend=1 00:16:16.388 --rc geninfo_all_blocks=1 00:16:16.388 --rc geninfo_unexecuted_blocks=1 00:16:16.388 00:16:16.388 ' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:16.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.388 --rc genhtml_branch_coverage=1 00:16:16.388 --rc genhtml_function_coverage=1 00:16:16.388 --rc genhtml_legend=1 00:16:16.388 --rc geninfo_all_blocks=1 00:16:16.388 --rc geninfo_unexecuted_blocks=1 00:16:16.388 00:16:16.388 ' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:16.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.388 --rc genhtml_branch_coverage=1 00:16:16.388 --rc genhtml_function_coverage=1 00:16:16.388 --rc genhtml_legend=1 00:16:16.388 --rc geninfo_all_blocks=1 00:16:16.388 --rc geninfo_unexecuted_blocks=1 00:16:16.388 00:16:16.388 ' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73475 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:16.388 04:07:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:16.389 04:07:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73475 00:16:16.389 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 73475 ']' 00:16:16.389 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.389 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:16.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.389 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.389 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:16.389 04:07:09 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:16.389 [2024-10-13 04:07:09.510678] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
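At this point bdevperf has been launched in RPC-driven mode with the flags traced above (-z -T ftl0), and the script is blocked in waitforlisten until the application's RPC socket at /var/tmp/spdk.sock comes up. A sketch of that launch pattern, using the binary path and helper name from the trace (the pid handling is simplified):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid"    # returns once /var/tmp/spdk.sock accepts RPC connections

The base and cache bdevs are then built over RPC with scripts/rpc.py, which is what the trace that follows shows.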
00:16:16.389 [2024-10-13 04:07:09.510798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73475 ] 00:16:16.648 [2024-10-13 04:07:09.660786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.648 [2024-10-13 04:07:09.754755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.214 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:17.214 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:17.214 04:07:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:17.214 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:17.214 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:17.214 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:17.214 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:17.214 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:17.472 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:17.472 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:17.472 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:17.472 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:17.472 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:17.472 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:17.472 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:17.472 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:17.730 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:17.730 { 00:16:17.730 "name": "nvme0n1", 00:16:17.730 "aliases": [ 00:16:17.730 "cdc2377b-1880-4c0c-8f80-39ffc519515c" 00:16:17.730 ], 00:16:17.730 "product_name": "NVMe disk", 00:16:17.730 "block_size": 4096, 00:16:17.730 "num_blocks": 1310720, 00:16:17.730 "uuid": "cdc2377b-1880-4c0c-8f80-39ffc519515c", 00:16:17.730 "numa_id": -1, 00:16:17.730 "assigned_rate_limits": { 00:16:17.730 "rw_ios_per_sec": 0, 00:16:17.730 "rw_mbytes_per_sec": 0, 00:16:17.730 "r_mbytes_per_sec": 0, 00:16:17.730 "w_mbytes_per_sec": 0 00:16:17.730 }, 00:16:17.730 "claimed": true, 00:16:17.730 "claim_type": "read_many_write_one", 00:16:17.730 "zoned": false, 00:16:17.730 "supported_io_types": { 00:16:17.730 "read": true, 00:16:17.730 "write": true, 00:16:17.730 "unmap": true, 00:16:17.730 "flush": true, 00:16:17.730 "reset": true, 00:16:17.730 "nvme_admin": true, 00:16:17.730 "nvme_io": true, 00:16:17.730 "nvme_io_md": false, 00:16:17.730 "write_zeroes": true, 00:16:17.730 "zcopy": false, 00:16:17.730 "get_zone_info": false, 00:16:17.730 "zone_management": false, 00:16:17.730 "zone_append": false, 00:16:17.730 "compare": true, 00:16:17.730 "compare_and_write": false, 00:16:17.730 "abort": true, 00:16:17.730 "seek_hole": false, 00:16:17.730 "seek_data": false, 00:16:17.730 "copy": true, 00:16:17.730 "nvme_iov_md": false 00:16:17.730 }, 00:16:17.730 "driver_specific": { 00:16:17.730 
"nvme": [ 00:16:17.730 { 00:16:17.730 "pci_address": "0000:00:11.0", 00:16:17.730 "trid": { 00:16:17.730 "trtype": "PCIe", 00:16:17.730 "traddr": "0000:00:11.0" 00:16:17.730 }, 00:16:17.730 "ctrlr_data": { 00:16:17.730 "cntlid": 0, 00:16:17.730 "vendor_id": "0x1b36", 00:16:17.730 "model_number": "QEMU NVMe Ctrl", 00:16:17.730 "serial_number": "12341", 00:16:17.730 "firmware_revision": "8.0.0", 00:16:17.730 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:17.730 "oacs": { 00:16:17.730 "security": 0, 00:16:17.730 "format": 1, 00:16:17.730 "firmware": 0, 00:16:17.730 "ns_manage": 1 00:16:17.730 }, 00:16:17.730 "multi_ctrlr": false, 00:16:17.730 "ana_reporting": false 00:16:17.730 }, 00:16:17.730 "vs": { 00:16:17.730 "nvme_version": "1.4" 00:16:17.730 }, 00:16:17.730 "ns_data": { 00:16:17.730 "id": 1, 00:16:17.730 "can_share": false 00:16:17.730 } 00:16:17.730 } 00:16:17.730 ], 00:16:17.730 "mp_policy": "active_passive" 00:16:17.730 } 00:16:17.730 } 00:16:17.730 ]' 00:16:17.730 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:17.730 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:17.730 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:17.730 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:17.730 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:17.730 04:07:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:17.730 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:17.730 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:17.730 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:17.730 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:17.730 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:17.989 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=149da3ed-ccf0-492d-be01-74feff88cf0a 00:16:17.989 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:17.989 04:07:10 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 149da3ed-ccf0-492d-be01-74feff88cf0a 00:16:18.246 04:07:11 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:18.246 04:07:11 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=13181370-6e7b-4730-95ff-d7d74a24bf2d 00:16:18.246 04:07:11 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 13181370-6e7b-4730-95ff-d7d74a24bf2d 00:16:18.503 04:07:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=c645b067-c7f4-4b97-b445-0977520059b7 00:16:18.503 04:07:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c645b067-c7f4-4b97-b445-0977520059b7 00:16:18.503 04:07:11 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:18.503 04:07:11 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:18.503 04:07:11 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=c645b067-c7f4-4b97-b445-0977520059b7 00:16:18.503 04:07:11 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:18.503 04:07:11 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size c645b067-c7f4-4b97-b445-0977520059b7 00:16:18.503 04:07:11 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c645b067-c7f4-4b97-b445-0977520059b7 00:16:18.503 04:07:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:18.503 04:07:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:18.503 04:07:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:18.504 04:07:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c645b067-c7f4-4b97-b445-0977520059b7 00:16:18.761 04:07:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:18.761 { 00:16:18.761 "name": "c645b067-c7f4-4b97-b445-0977520059b7", 00:16:18.761 "aliases": [ 00:16:18.761 "lvs/nvme0n1p0" 00:16:18.761 ], 00:16:18.761 "product_name": "Logical Volume", 00:16:18.761 "block_size": 4096, 00:16:18.761 "num_blocks": 26476544, 00:16:18.761 "uuid": "c645b067-c7f4-4b97-b445-0977520059b7", 00:16:18.761 "assigned_rate_limits": { 00:16:18.761 "rw_ios_per_sec": 0, 00:16:18.761 "rw_mbytes_per_sec": 0, 00:16:18.761 "r_mbytes_per_sec": 0, 00:16:18.761 "w_mbytes_per_sec": 0 00:16:18.761 }, 00:16:18.761 "claimed": false, 00:16:18.761 "zoned": false, 00:16:18.761 "supported_io_types": { 00:16:18.761 "read": true, 00:16:18.761 "write": true, 00:16:18.761 "unmap": true, 00:16:18.761 "flush": false, 00:16:18.761 "reset": true, 00:16:18.761 "nvme_admin": false, 00:16:18.761 "nvme_io": false, 00:16:18.761 "nvme_io_md": false, 00:16:18.761 "write_zeroes": true, 00:16:18.761 "zcopy": false, 00:16:18.761 "get_zone_info": false, 00:16:18.761 "zone_management": false, 00:16:18.761 "zone_append": false, 00:16:18.761 "compare": false, 00:16:18.761 "compare_and_write": false, 00:16:18.761 "abort": false, 00:16:18.761 "seek_hole": true, 00:16:18.761 "seek_data": true, 00:16:18.761 "copy": false, 00:16:18.761 "nvme_iov_md": false 00:16:18.761 }, 00:16:18.761 "driver_specific": { 00:16:18.761 "lvol": { 00:16:18.761 "lvol_store_uuid": "13181370-6e7b-4730-95ff-d7d74a24bf2d", 00:16:18.761 "base_bdev": "nvme0n1", 00:16:18.761 "thin_provision": true, 00:16:18.761 "num_allocated_clusters": 0, 00:16:18.761 "snapshot": false, 00:16:18.761 "clone": false, 00:16:18.761 "esnap_clone": false 00:16:18.761 } 00:16:18.761 } 00:16:18.761 } 00:16:18.761 ]' 00:16:18.761 04:07:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:18.761 04:07:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:18.761 04:07:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:18.761 04:07:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:18.761 04:07:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:18.761 04:07:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:18.761 04:07:11 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:18.761 04:07:11 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:18.761 04:07:11 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:19.019 04:07:12 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:19.019 04:07:12 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:19.019 04:07:12 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size c645b067-c7f4-4b97-b445-0977520059b7 00:16:19.019 04:07:12 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=c645b067-c7f4-4b97-b445-0977520059b7 00:16:19.019 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:19.019 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:19.019 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:19.019 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c645b067-c7f4-4b97-b445-0977520059b7 00:16:19.277 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:19.277 { 00:16:19.277 "name": "c645b067-c7f4-4b97-b445-0977520059b7", 00:16:19.277 "aliases": [ 00:16:19.277 "lvs/nvme0n1p0" 00:16:19.277 ], 00:16:19.278 "product_name": "Logical Volume", 00:16:19.278 "block_size": 4096, 00:16:19.278 "num_blocks": 26476544, 00:16:19.278 "uuid": "c645b067-c7f4-4b97-b445-0977520059b7", 00:16:19.278 "assigned_rate_limits": { 00:16:19.278 "rw_ios_per_sec": 0, 00:16:19.278 "rw_mbytes_per_sec": 0, 00:16:19.278 "r_mbytes_per_sec": 0, 00:16:19.278 "w_mbytes_per_sec": 0 00:16:19.278 }, 00:16:19.278 "claimed": false, 00:16:19.278 "zoned": false, 00:16:19.278 "supported_io_types": { 00:16:19.278 "read": true, 00:16:19.278 "write": true, 00:16:19.278 "unmap": true, 00:16:19.278 "flush": false, 00:16:19.278 "reset": true, 00:16:19.278 "nvme_admin": false, 00:16:19.278 "nvme_io": false, 00:16:19.278 "nvme_io_md": false, 00:16:19.278 "write_zeroes": true, 00:16:19.278 "zcopy": false, 00:16:19.278 "get_zone_info": false, 00:16:19.278 "zone_management": false, 00:16:19.278 "zone_append": false, 00:16:19.278 "compare": false, 00:16:19.278 "compare_and_write": false, 00:16:19.278 "abort": false, 00:16:19.278 "seek_hole": true, 00:16:19.278 "seek_data": true, 00:16:19.278 "copy": false, 00:16:19.278 "nvme_iov_md": false 00:16:19.278 }, 00:16:19.278 "driver_specific": { 00:16:19.278 "lvol": { 00:16:19.278 "lvol_store_uuid": "13181370-6e7b-4730-95ff-d7d74a24bf2d", 00:16:19.278 "base_bdev": "nvme0n1", 00:16:19.278 "thin_provision": true, 00:16:19.278 "num_allocated_clusters": 0, 00:16:19.278 "snapshot": false, 00:16:19.278 "clone": false, 00:16:19.278 "esnap_clone": false 00:16:19.278 } 00:16:19.278 } 00:16:19.278 } 00:16:19.278 ]' 00:16:19.278 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:19.278 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:19.278 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:19.278 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:19.278 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:19.278 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:19.278 04:07:12 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:19.278 04:07:12 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:19.535 04:07:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:19.535 04:07:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size c645b067-c7f4-4b97-b445-0977520059b7 00:16:19.535 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c645b067-c7f4-4b97-b445-0977520059b7 00:16:19.535 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:19.535 04:07:12 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:19.535 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:19.535 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c645b067-c7f4-4b97-b445-0977520059b7 00:16:19.794 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:19.794 { 00:16:19.794 "name": "c645b067-c7f4-4b97-b445-0977520059b7", 00:16:19.794 "aliases": [ 00:16:19.794 "lvs/nvme0n1p0" 00:16:19.794 ], 00:16:19.794 "product_name": "Logical Volume", 00:16:19.794 "block_size": 4096, 00:16:19.794 "num_blocks": 26476544, 00:16:19.794 "uuid": "c645b067-c7f4-4b97-b445-0977520059b7", 00:16:19.794 "assigned_rate_limits": { 00:16:19.794 "rw_ios_per_sec": 0, 00:16:19.794 "rw_mbytes_per_sec": 0, 00:16:19.794 "r_mbytes_per_sec": 0, 00:16:19.794 "w_mbytes_per_sec": 0 00:16:19.794 }, 00:16:19.794 "claimed": false, 00:16:19.794 "zoned": false, 00:16:19.794 "supported_io_types": { 00:16:19.794 "read": true, 00:16:19.794 "write": true, 00:16:19.794 "unmap": true, 00:16:19.794 "flush": false, 00:16:19.794 "reset": true, 00:16:19.794 "nvme_admin": false, 00:16:19.794 "nvme_io": false, 00:16:19.794 "nvme_io_md": false, 00:16:19.794 "write_zeroes": true, 00:16:19.794 "zcopy": false, 00:16:19.794 "get_zone_info": false, 00:16:19.794 "zone_management": false, 00:16:19.794 "zone_append": false, 00:16:19.794 "compare": false, 00:16:19.794 "compare_and_write": false, 00:16:19.794 "abort": false, 00:16:19.794 "seek_hole": true, 00:16:19.794 "seek_data": true, 00:16:19.794 "copy": false, 00:16:19.794 "nvme_iov_md": false 00:16:19.794 }, 00:16:19.794 "driver_specific": { 00:16:19.794 "lvol": { 00:16:19.794 "lvol_store_uuid": "13181370-6e7b-4730-95ff-d7d74a24bf2d", 00:16:19.794 "base_bdev": "nvme0n1", 00:16:19.794 "thin_provision": true, 00:16:19.794 "num_allocated_clusters": 0, 00:16:19.794 "snapshot": false, 00:16:19.794 "clone": false, 00:16:19.794 "esnap_clone": false 00:16:19.794 } 00:16:19.794 } 00:16:19.794 } 00:16:19.794 ]' 00:16:19.794 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:19.794 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:19.794 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:19.794 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:19.794 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:19.794 04:07:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:19.794 04:07:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:19.794 04:07:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c645b067-c7f4-4b97-b445-0977520059b7 -c nvc0n1p0 --l2p_dram_limit 20 00:16:19.794 [2024-10-13 04:07:12.904444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.794 [2024-10-13 04:07:12.904485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:19.794 [2024-10-13 04:07:12.904496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:19.794 [2024-10-13 04:07:12.904505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.794 [2024-10-13 04:07:12.904545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.794 [2024-10-13 04:07:12.904554] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:19.794 [2024-10-13 04:07:12.904561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:19.794 [2024-10-13 04:07:12.904583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.794 [2024-10-13 04:07:12.904596] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:19.794 [2024-10-13 04:07:12.905231] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:19.794 [2024-10-13 04:07:12.905249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.794 [2024-10-13 04:07:12.905258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:19.794 [2024-10-13 04:07:12.905265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:16:19.794 [2024-10-13 04:07:12.905272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.794 [2024-10-13 04:07:12.905396] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 66076d0c-14e8-4811-b24a-d12e8ee73f6c 00:16:19.795 [2024-10-13 04:07:12.906337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.795 [2024-10-13 04:07:12.906365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:19.795 [2024-10-13 04:07:12.906374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:19.795 [2024-10-13 04:07:12.906383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.795 [2024-10-13 04:07:12.911151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.795 [2024-10-13 04:07:12.911180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:19.795 [2024-10-13 04:07:12.911189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.737 ms 00:16:19.795 [2024-10-13 04:07:12.911195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.795 [2024-10-13 04:07:12.911259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.795 [2024-10-13 04:07:12.911267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:19.795 [2024-10-13 04:07:12.911279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:19.795 [2024-10-13 04:07:12.911284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.795 [2024-10-13 04:07:12.911313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.795 [2024-10-13 04:07:12.911319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:19.795 [2024-10-13 04:07:12.911327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:19.795 [2024-10-13 04:07:12.911332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.795 [2024-10-13 04:07:12.911354] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:19.795 [2024-10-13 04:07:12.914208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.795 [2024-10-13 04:07:12.914234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:19.795 [2024-10-13 04:07:12.914241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.859 ms 00:16:19.795 [2024-10-13 04:07:12.914250] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.795 [2024-10-13 04:07:12.914273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.795 [2024-10-13 04:07:12.914280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:19.795 [2024-10-13 04:07:12.914288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:19.795 [2024-10-13 04:07:12.914295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.795 [2024-10-13 04:07:12.914311] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:19.795 [2024-10-13 04:07:12.914417] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:19.795 [2024-10-13 04:07:12.914429] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:19.795 [2024-10-13 04:07:12.914440] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:19.795 [2024-10-13 04:07:12.914448] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:19.795 [2024-10-13 04:07:12.914456] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:19.795 [2024-10-13 04:07:12.914462] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:19.795 [2024-10-13 04:07:12.914469] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:19.795 [2024-10-13 04:07:12.914475] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:19.795 [2024-10-13 04:07:12.914482] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:19.795 [2024-10-13 04:07:12.914488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.795 [2024-10-13 04:07:12.914494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:19.795 [2024-10-13 04:07:12.914500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:16:19.795 [2024-10-13 04:07:12.914508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.795 [2024-10-13 04:07:12.914569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.795 [2024-10-13 04:07:12.914578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:19.795 [2024-10-13 04:07:12.914584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:19.795 [2024-10-13 04:07:12.914591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.795 [2024-10-13 04:07:12.914672] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:19.795 [2024-10-13 04:07:12.914681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:19.795 [2024-10-13 04:07:12.914687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:19.795 [2024-10-13 04:07:12.914694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:19.795 [2024-10-13 04:07:12.914706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:19.795 
[2024-10-13 04:07:12.914719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:19.795 [2024-10-13 04:07:12.914724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:19.795 [2024-10-13 04:07:12.914735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:19.795 [2024-10-13 04:07:12.914743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:19.795 [2024-10-13 04:07:12.914748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:19.795 [2024-10-13 04:07:12.914759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:19.795 [2024-10-13 04:07:12.914766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:19.795 [2024-10-13 04:07:12.914774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:19.795 [2024-10-13 04:07:12.914785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:19.795 [2024-10-13 04:07:12.914790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:19.795 [2024-10-13 04:07:12.914802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:19.795 [2024-10-13 04:07:12.914814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:19.795 [2024-10-13 04:07:12.914820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:19.795 [2024-10-13 04:07:12.914831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:19.795 [2024-10-13 04:07:12.914836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:19.795 [2024-10-13 04:07:12.914847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:19.795 [2024-10-13 04:07:12.914853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:19.795 [2024-10-13 04:07:12.914865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:19.795 [2024-10-13 04:07:12.914870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:19.795 [2024-10-13 04:07:12.914881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:19.795 [2024-10-13 04:07:12.914887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:19.795 [2024-10-13 04:07:12.914892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:19.795 [2024-10-13 04:07:12.914898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:19.795 [2024-10-13 04:07:12.914903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:19.795 [2024-10-13 04:07:12.914909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:19.795 [2024-10-13 04:07:12.914920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:19.795 [2024-10-13 04:07:12.914925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914931] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:19.795 [2024-10-13 04:07:12.914938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:19.795 [2024-10-13 04:07:12.914945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:19.795 [2024-10-13 04:07:12.914951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:19.795 [2024-10-13 04:07:12.914960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:19.795 [2024-10-13 04:07:12.914966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:19.795 [2024-10-13 04:07:12.914972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:19.795 [2024-10-13 04:07:12.914977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:19.795 [2024-10-13 04:07:12.914983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:19.795 [2024-10-13 04:07:12.914989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:19.795 [2024-10-13 04:07:12.914997] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:19.795 [2024-10-13 04:07:12.915004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:19.795 [2024-10-13 04:07:12.915012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:19.795 [2024-10-13 04:07:12.915017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:19.795 [2024-10-13 04:07:12.915024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:19.795 [2024-10-13 04:07:12.915029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:19.795 [2024-10-13 04:07:12.915036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:19.795 [2024-10-13 04:07:12.915041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:19.795 [2024-10-13 04:07:12.915047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:19.795 [2024-10-13 04:07:12.915053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:19.795 [2024-10-13 04:07:12.915060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:19.795 [2024-10-13 04:07:12.915066] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:19.796 [2024-10-13 04:07:12.915072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:19.796 [2024-10-13 04:07:12.915077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:19.796 [2024-10-13 04:07:12.915084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:19.796 [2024-10-13 04:07:12.915089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:19.796 [2024-10-13 04:07:12.915096] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:19.796 [2024-10-13 04:07:12.915102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:19.796 [2024-10-13 04:07:12.915110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:19.796 [2024-10-13 04:07:12.915116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:19.796 [2024-10-13 04:07:12.915123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:19.796 [2024-10-13 04:07:12.915129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:19.796 [2024-10-13 04:07:12.915135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.796 [2024-10-13 04:07:12.915141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:19.796 [2024-10-13 04:07:12.915148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:16:19.796 [2024-10-13 04:07:12.915156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.796 [2024-10-13 04:07:12.915181] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
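The region sizes in the layout dump above follow directly from the reported geometry. A minimal Python sketch of that arithmetic, with every input copied from the dump itself (illustration only, not part of the test scripts):

# Cross-check the FTL layout numbers printed above; inputs are taken
# verbatim from the dump, nothing here is produced by the test itself.
l2p_entries = 20971520      # "L2P entries: 20971520"
l2p_addr_size = 4           # "L2P address size: 4" (bytes per entry)
block_size = 4096           # base bdev block size
base_blocks = 26476544      # num_blocks of the lvol backing ftl0

print(l2p_entries * l2p_addr_size / 2**20)   # 80.0 -> "Region l2p ... blocks: 80.00 MiB"
print(base_blocks * block_size / 2**20)      # 103424.0 -> "Base device capacity: 103424.00 MiB"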
00:16:19.796 [2024-10-13 04:07:12.915188] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:22.325 [2024-10-13 04:07:15.167312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.167367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:22.325 [2024-10-13 04:07:15.167383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2252.117 ms 00:16:22.325 [2024-10-13 04:07:15.167391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.192747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.192797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:22.325 [2024-10-13 04:07:15.192813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.155 ms 00:16:22.325 [2024-10-13 04:07:15.192821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.192953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.192963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:22.325 [2024-10-13 04:07:15.192978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:22.325 [2024-10-13 04:07:15.192986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.243320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.243369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:22.325 [2024-10-13 04:07:15.243386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.284 ms 00:16:22.325 [2024-10-13 04:07:15.243394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.243436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.243445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:22.325 [2024-10-13 04:07:15.243455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:22.325 [2024-10-13 04:07:15.243465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.243847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.243866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:22.325 [2024-10-13 04:07:15.243877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:16:22.325 [2024-10-13 04:07:15.243885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.244010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.244019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:22.325 [2024-10-13 04:07:15.244032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:16:22.325 [2024-10-13 04:07:15.244039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.257012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.257042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:22.325 [2024-10-13 
04:07:15.257054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.957 ms 00:16:22.325 [2024-10-13 04:07:15.257062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.268252] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:22.325 [2024-10-13 04:07:15.273197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.273232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:22.325 [2024-10-13 04:07:15.273243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.068 ms 00:16:22.325 [2024-10-13 04:07:15.273252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.331846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.331895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:22.325 [2024-10-13 04:07:15.331906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.575 ms 00:16:22.325 [2024-10-13 04:07:15.331916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.332093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.332107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:22.325 [2024-10-13 04:07:15.332115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:16:22.325 [2024-10-13 04:07:15.332124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.355038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.355074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:22.325 [2024-10-13 04:07:15.355085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.873 ms 00:16:22.325 [2024-10-13 04:07:15.355094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.377492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.377543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:22.325 [2024-10-13 04:07:15.377554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.366 ms 00:16:22.325 [2024-10-13 04:07:15.377563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.378150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.378217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:22.325 [2024-10-13 04:07:15.378229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:16:22.325 [2024-10-13 04:07:15.378238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 04:07:15.448876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.449053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:22.325 [2024-10-13 04:07:15.449075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.602 ms 00:16:22.325 [2024-10-13 04:07:15.449086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.325 [2024-10-13 
04:07:15.473095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.325 [2024-10-13 04:07:15.473153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:22.326 [2024-10-13 04:07:15.473166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.932 ms 00:16:22.326 [2024-10-13 04:07:15.473176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.584 [2024-10-13 04:07:15.496830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.584 [2024-10-13 04:07:15.496868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:22.584 [2024-10-13 04:07:15.496878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.613 ms 00:16:22.584 [2024-10-13 04:07:15.496887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.584 [2024-10-13 04:07:15.519536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.584 [2024-10-13 04:07:15.519570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:22.584 [2024-10-13 04:07:15.519580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.619 ms 00:16:22.584 [2024-10-13 04:07:15.519590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.584 [2024-10-13 04:07:15.519634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.584 [2024-10-13 04:07:15.519662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:22.584 [2024-10-13 04:07:15.519671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:22.584 [2024-10-13 04:07:15.519679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.584 [2024-10-13 04:07:15.519752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.584 [2024-10-13 04:07:15.519765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:22.584 [2024-10-13 04:07:15.519772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:22.584 [2024-10-13 04:07:15.519781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.584 [2024-10-13 04:07:15.520931] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2616.049 ms, result 0 00:16:22.584 { 00:16:22.584 "name": "ftl0", 00:16:22.584 "uuid": "66076d0c-14e8-4811-b24a-d12e8ee73f6c" 00:16:22.584 } 00:16:22.584 04:07:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:22.584 04:07:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:22.584 04:07:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:22.841 04:07:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:22.841 [2024-10-13 04:07:15.853001] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:22.841 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:22.841 Zero copy mechanism will not be used. 00:16:22.841 Running I/O for 4 seconds... 
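For reference, the MiB/s figures in the bdevperf samples and summary tables that follow are simply IOPS scaled by the I/O size of each run. A minimal Python sketch of the conversion, using IOPS values copied from the summaries reported below (illustration only):

# Convert bdevperf IOPS to MiB/s; io_size matches the -o argument of each
# perform_tests invocation, IOPS values are copied from the printed summaries.
def iops_to_mibps(iops, io_size_bytes):
    return iops * io_size_bytes / 2**20

print(iops_to_mibps(1790.70, 69632))   # ~118.91 MiB/s, qd=1 randwrite run
print(iops_to_mibps(7708.66, 4096))    # ~30.11 MiB/s, qd=128 randwrite run
print(iops_to_mibps(6221.16, 4096))    # ~24.30 MiB/s, qd=128 verify run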
00:16:24.707 2609.00 IOPS, 173.25 MiB/s [2024-10-13T04:07:19.254Z] 2047.00 IOPS, 135.93 MiB/s [2024-10-13T04:07:20.190Z] 1889.00 IOPS, 125.44 MiB/s [2024-10-13T04:07:20.190Z] 1791.25 IOPS, 118.95 MiB/s 00:16:27.030 Latency(us) 00:16:27.030 [2024-10-13T04:07:20.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.030 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:27.030 ftl0 : 4.00 1790.70 118.91 0.00 0.00 585.72 170.93 2192.94 00:16:27.030 [2024-10-13T04:07:20.190Z] =================================================================================================================== 00:16:27.030 [2024-10-13T04:07:20.190Z] Total : 1790.70 118.91 0.00 0.00 585.72 170.93 2192.94 00:16:27.030 [2024-10-13 04:07:19.862888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:27.030 { 00:16:27.030 "results": [ 00:16:27.030 { 00:16:27.030 "job": "ftl0", 00:16:27.030 "core_mask": "0x1", 00:16:27.030 "workload": "randwrite", 00:16:27.030 "status": "finished", 00:16:27.030 "queue_depth": 1, 00:16:27.030 "io_size": 69632, 00:16:27.030 "runtime": 4.001798, 00:16:27.030 "iops": 1790.6950825603892, 00:16:27.030 "mibps": 118.91334532627585, 00:16:27.030 "io_failed": 0, 00:16:27.030 "io_timeout": 0, 00:16:27.030 "avg_latency_us": 585.7246913845296, 00:16:27.030 "min_latency_us": 170.92923076923077, 00:16:27.030 "max_latency_us": 2192.9353846153845 00:16:27.030 } 00:16:27.030 ], 00:16:27.030 "core_count": 1 00:16:27.030 } 00:16:27.030 04:07:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:27.030 [2024-10-13 04:07:19.969935] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:27.030 Running I/O for 4 seconds... 
00:16:28.898 7297.00 IOPS, 28.50 MiB/s [2024-10-13T04:07:22.993Z] 6740.50 IOPS, 26.33 MiB/s [2024-10-13T04:07:24.367Z] 6595.33 IOPS, 25.76 MiB/s [2024-10-13T04:07:24.367Z] 7703.50 IOPS, 30.09 MiB/s 00:16:31.207 Latency(us) 00:16:31.207 [2024-10-13T04:07:24.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.207 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:31.207 ftl0 : 4.01 7708.66 30.11 0.00 0.00 16575.07 230.01 45774.38 00:16:31.207 [2024-10-13T04:07:24.367Z] =================================================================================================================== 00:16:31.207 [2024-10-13T04:07:24.367Z] Total : 7708.66 30.11 0.00 0.00 16575.07 0.00 45774.38 00:16:31.207 { 00:16:31.207 "results": [ 00:16:31.207 { 00:16:31.207 "job": "ftl0", 00:16:31.207 "core_mask": "0x1", 00:16:31.207 "workload": "randwrite", 00:16:31.207 "status": "finished", 00:16:31.207 "queue_depth": 128, 00:16:31.207 "io_size": 4096, 00:16:31.207 "runtime": 4.013795, 00:16:31.207 "iops": 7708.664742469408, 00:16:31.207 "mibps": 30.111971650271126, 00:16:31.207 "io_failed": 0, 00:16:31.207 "io_timeout": 0, 00:16:31.207 "avg_latency_us": 16575.072524830135, 00:16:31.207 "min_latency_us": 230.00615384615384, 00:16:31.207 "max_latency_us": 45774.375384615385 00:16:31.207 } 00:16:31.207 ], 00:16:31.207 "core_count": 1 00:16:31.207 } 00:16:31.207 [2024-10-13 04:07:23.992721] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:31.207 04:07:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:31.207 [2024-10-13 04:07:24.099393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:31.207 Running I/O for 4 seconds... 
00:16:33.076 8883.00 IOPS, 34.70 MiB/s [2024-10-13T04:07:27.171Z] 7132.00 IOPS, 27.86 MiB/s [2024-10-13T04:07:28.547Z] 6542.67 IOPS, 25.56 MiB/s [2024-10-13T04:07:28.547Z] 6215.00 IOPS, 24.28 MiB/s 00:16:35.387 Latency(us) 00:16:35.387 [2024-10-13T04:07:28.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.387 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:35.387 Verification LBA range: start 0x0 length 0x1400000 00:16:35.387 ftl0 : 4.02 6221.16 24.30 0.00 0.00 20507.48 266.24 38111.70 00:16:35.387 [2024-10-13T04:07:28.547Z] =================================================================================================================== 00:16:35.387 [2024-10-13T04:07:28.547Z] Total : 6221.16 24.30 0.00 0.00 20507.48 0.00 38111.70 00:16:35.387 [2024-10-13 04:07:28.130660] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:35.387 { 00:16:35.387 "results": [ 00:16:35.387 { 00:16:35.387 "job": "ftl0", 00:16:35.387 "core_mask": "0x1", 00:16:35.387 "workload": "verify", 00:16:35.387 "status": "finished", 00:16:35.387 "verify_range": { 00:16:35.387 "start": 0, 00:16:35.387 "length": 20971520 00:16:35.387 }, 00:16:35.387 "queue_depth": 128, 00:16:35.387 "io_size": 4096, 00:16:35.387 "runtime": 4.015004, 00:16:35.387 "iops": 6221.164412289502, 00:16:35.387 "mibps": 24.301423485505868, 00:16:35.387 "io_failed": 0, 00:16:35.387 "io_timeout": 0, 00:16:35.387 "avg_latency_us": 20507.484841922433, 00:16:35.387 "min_latency_us": 266.24, 00:16:35.387 "max_latency_us": 38111.70461538462 00:16:35.387 } 00:16:35.387 ], 00:16:35.387 "core_count": 1 00:16:35.387 } 00:16:35.387 04:07:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:35.387 [2024-10-13 04:07:28.332802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.387 [2024-10-13 04:07:28.332845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:35.387 [2024-10-13 04:07:28.332858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:35.387 [2024-10-13 04:07:28.332868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.387 [2024-10-13 04:07:28.332890] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:35.387 [2024-10-13 04:07:28.335486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.387 [2024-10-13 04:07:28.335517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:35.387 [2024-10-13 04:07:28.335530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:16:35.387 [2024-10-13 04:07:28.335539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.387 [2024-10-13 04:07:28.338151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.387 [2024-10-13 04:07:28.338180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:35.387 [2024-10-13 04:07:28.338194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:16:35.387 [2024-10-13 04:07:28.338202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.647 [2024-10-13 04:07:28.553899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.647 [2024-10-13 04:07:28.553939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:16:35.647 [2024-10-13 04:07:28.553960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 215.675 ms 00:16:35.647 [2024-10-13 04:07:28.553968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.647 [2024-10-13 04:07:28.560134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.647 [2024-10-13 04:07:28.560261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:35.647 [2024-10-13 04:07:28.560281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.133 ms 00:16:35.647 [2024-10-13 04:07:28.560288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.647 [2024-10-13 04:07:28.584423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.647 [2024-10-13 04:07:28.584454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:35.647 [2024-10-13 04:07:28.584466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.077 ms 00:16:35.647 [2024-10-13 04:07:28.584473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.647 [2024-10-13 04:07:28.599441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.647 [2024-10-13 04:07:28.599568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:35.647 [2024-10-13 04:07:28.599591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.932 ms 00:16:35.647 [2024-10-13 04:07:28.599599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.647 [2024-10-13 04:07:28.599748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.647 [2024-10-13 04:07:28.599760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:35.647 [2024-10-13 04:07:28.599772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:35.647 [2024-10-13 04:07:28.599780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.647 [2024-10-13 04:07:28.622828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.647 [2024-10-13 04:07:28.622858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:35.647 [2024-10-13 04:07:28.622871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.032 ms 00:16:35.647 [2024-10-13 04:07:28.622878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.647 [2024-10-13 04:07:28.646108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.647 [2024-10-13 04:07:28.646225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:35.647 [2024-10-13 04:07:28.646244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.196 ms 00:16:35.647 [2024-10-13 04:07:28.646252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.647 [2024-10-13 04:07:28.669161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.647 [2024-10-13 04:07:28.669273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:35.647 [2024-10-13 04:07:28.669291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.880 ms 00:16:35.647 [2024-10-13 04:07:28.669298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.647 [2024-10-13 04:07:28.691382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.647 [2024-10-13 04:07:28.691419] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:35.647 [2024-10-13 04:07:28.691432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.023 ms 00:16:35.647 [2024-10-13 04:07:28.691438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.647 [2024-10-13 04:07:28.691483] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:35.647 [2024-10-13 04:07:28.691496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:35.647 [2024-10-13 04:07:28.691709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:35.647 [2024-10-13 04:07:28.691813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.691992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692337] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:35.648 [2024-10-13 04:07:28.692378] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:35.648 [2024-10-13 04:07:28.692388] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 66076d0c-14e8-4811-b24a-d12e8ee73f6c 00:16:35.648 [2024-10-13 04:07:28.692395] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:35.648 [2024-10-13 04:07:28.692404] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:35.648 [2024-10-13 04:07:28.692411] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:35.648 [2024-10-13 04:07:28.692420] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:35.648 [2024-10-13 04:07:28.692429] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:35.648 [2024-10-13 04:07:28.692438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:35.648 [2024-10-13 04:07:28.692445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:35.648 [2024-10-13 04:07:28.692455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:35.648 [2024-10-13 04:07:28.692461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:35.648 [2024-10-13 04:07:28.692469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.648 [2024-10-13 04:07:28.692477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:35.648 [2024-10-13 04:07:28.692486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:16:35.648 [2024-10-13 04:07:28.692494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.648 [2024-10-13 04:07:28.704779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.648 [2024-10-13 04:07:28.704807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:35.648 [2024-10-13 04:07:28.704821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.255 ms 00:16:35.648 [2024-10-13 04:07:28.704828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.648 [2024-10-13 04:07:28.705171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.648 [2024-10-13 04:07:28.705180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:35.648 [2024-10-13 04:07:28.705189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:16:35.648 [2024-10-13 04:07:28.705196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.648 [2024-10-13 04:07:28.739766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.648 [2024-10-13 04:07:28.739797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:35.648 [2024-10-13 04:07:28.739812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.648 [2024-10-13 04:07:28.739820] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:35.648 [2024-10-13 04:07:28.739872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.648 [2024-10-13 04:07:28.739880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:35.648 [2024-10-13 04:07:28.739889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.648 [2024-10-13 04:07:28.739896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.648 [2024-10-13 04:07:28.739967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.648 [2024-10-13 04:07:28.739985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:35.648 [2024-10-13 04:07:28.739995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.648 [2024-10-13 04:07:28.740004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.648 [2024-10-13 04:07:28.740019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.648 [2024-10-13 04:07:28.740026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:35.649 [2024-10-13 04:07:28.740035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.649 [2024-10-13 04:07:28.740042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.907 [2024-10-13 04:07:28.818849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.907 [2024-10-13 04:07:28.819022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:35.907 [2024-10-13 04:07:28.819050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.907 [2024-10-13 04:07:28.819058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.907 [2024-10-13 04:07:28.881360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.907 [2024-10-13 04:07:28.881402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:35.907 [2024-10-13 04:07:28.881416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.907 [2024-10-13 04:07:28.881424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.907 [2024-10-13 04:07:28.881494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.907 [2024-10-13 04:07:28.881504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:35.907 [2024-10-13 04:07:28.881514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.907 [2024-10-13 04:07:28.881521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.907 [2024-10-13 04:07:28.881581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.907 [2024-10-13 04:07:28.881591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:35.907 [2024-10-13 04:07:28.881601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.907 [2024-10-13 04:07:28.881608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.907 [2024-10-13 04:07:28.881711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.907 [2024-10-13 04:07:28.881721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:35.907 [2024-10-13 04:07:28.881747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:35.907 [2024-10-13 04:07:28.881755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.907 [2024-10-13 04:07:28.881787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.907 [2024-10-13 04:07:28.881796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:35.907 [2024-10-13 04:07:28.881805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.907 [2024-10-13 04:07:28.881813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.907 [2024-10-13 04:07:28.881847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.907 [2024-10-13 04:07:28.881855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:35.907 [2024-10-13 04:07:28.881864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.907 [2024-10-13 04:07:28.881871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.907 [2024-10-13 04:07:28.881912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.907 [2024-10-13 04:07:28.881931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:35.907 [2024-10-13 04:07:28.881941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.907 [2024-10-13 04:07:28.881948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.907 [2024-10-13 04:07:28.882063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 549.222 ms, result 0 00:16:35.907 true 00:16:35.907 04:07:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73475 00:16:35.907 04:07:28 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 73475 ']' 00:16:35.907 04:07:28 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 73475 00:16:35.907 04:07:28 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:35.907 04:07:28 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.907 04:07:28 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73475 00:16:35.907 killing process with pid 73475 00:16:35.907 Received shutdown signal, test time was about 4.000000 seconds 00:16:35.907 00:16:35.907 Latency(us) 00:16:35.907 [2024-10-13T04:07:29.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.907 [2024-10-13T04:07:29.067Z] =================================================================================================================== 00:16:35.907 [2024-10-13T04:07:29.067Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:35.908 04:07:28 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.908 04:07:28 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.908 04:07:28 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73475' 00:16:35.908 04:07:28 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 73475 00:16:35.908 04:07:28 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 73475 00:16:41.172 04:07:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:41.172 04:07:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:41.172 04:07:33 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:41.172 Remove shared memory files 00:16:41.172 04:07:33 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:41.172 04:07:33 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:41.172 04:07:33 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:41.172 04:07:33 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:41.172 04:07:33 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:41.172 ************************************ 00:16:41.172 END TEST ftl_bdevperf 00:16:41.172 ************************************ 00:16:41.172 00:16:41.172 real 0m24.485s 00:16:41.172 user 0m26.883s 00:16:41.172 sys 0m0.817s 00:16:41.172 04:07:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:41.172 04:07:33 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:41.172 04:07:33 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:41.172 04:07:33 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:41.172 04:07:33 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:41.172 04:07:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:41.172 ************************************ 00:16:41.172 START TEST ftl_trim 00:16:41.172 ************************************ 00:16:41.172 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:41.172 * Looking for test storage... 00:16:41.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.172 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:41.172 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:16:41.172 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:41.172 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:41.172 04:07:33 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.173 04:07:33 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:41.173 04:07:33 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.173 04:07:33 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.173 04:07:33 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.173 04:07:33 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:41.173 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.173 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:41.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.173 --rc genhtml_branch_coverage=1 00:16:41.173 --rc genhtml_function_coverage=1 00:16:41.173 --rc genhtml_legend=1 00:16:41.173 --rc geninfo_all_blocks=1 00:16:41.173 --rc geninfo_unexecuted_blocks=1 00:16:41.173 00:16:41.173 ' 00:16:41.173 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:41.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.173 --rc genhtml_branch_coverage=1 00:16:41.173 --rc genhtml_function_coverage=1 00:16:41.173 --rc genhtml_legend=1 00:16:41.173 --rc geninfo_all_blocks=1 00:16:41.173 --rc geninfo_unexecuted_blocks=1 00:16:41.173 00:16:41.173 ' 00:16:41.173 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:41.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.173 --rc genhtml_branch_coverage=1 00:16:41.173 --rc genhtml_function_coverage=1 00:16:41.173 --rc genhtml_legend=1 00:16:41.173 --rc geninfo_all_blocks=1 00:16:41.173 --rc geninfo_unexecuted_blocks=1 00:16:41.173 00:16:41.173 ' 00:16:41.173 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:41.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.173 --rc genhtml_branch_coverage=1 00:16:41.173 --rc genhtml_function_coverage=1 00:16:41.173 --rc genhtml_legend=1 00:16:41.173 --rc geninfo_all_blocks=1 00:16:41.173 --rc geninfo_unexecuted_blocks=1 00:16:41.173 00:16:41.173 ' 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:41.173 04:07:33 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73809 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73809 00:16:41.173 04:07:33 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:41.173 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 73809 ']' 00:16:41.173 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.173 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:41.173 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.173 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:41.173 04:07:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:41.173 [2024-10-13 04:07:34.029335] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:16:41.173 [2024-10-13 04:07:34.029593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73809 ] 00:16:41.173 [2024-10-13 04:07:34.178882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:41.173 [2024-10-13 04:07:34.280782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.173 [2024-10-13 04:07:34.280907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.173 [2024-10-13 04:07:34.281017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.740 04:07:34 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.740 04:07:34 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:41.740 04:07:34 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:41.740 04:07:34 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:41.740 04:07:34 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:41.740 04:07:34 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:41.740 04:07:34 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:41.740 04:07:34 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:41.999 04:07:35 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:41.999 04:07:35 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:41.999 04:07:35 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:41.999 04:07:35 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:41.999 04:07:35 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:41.999 04:07:35 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:41.999 04:07:35 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:41.999 04:07:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:42.257 04:07:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:42.257 { 00:16:42.257 "name": "nvme0n1", 00:16:42.257 "aliases": [ 
00:16:42.257 "af209814-b4c9-493f-8fb0-5ccab31ac47e" 00:16:42.257 ], 00:16:42.257 "product_name": "NVMe disk", 00:16:42.257 "block_size": 4096, 00:16:42.257 "num_blocks": 1310720, 00:16:42.257 "uuid": "af209814-b4c9-493f-8fb0-5ccab31ac47e", 00:16:42.257 "numa_id": -1, 00:16:42.257 "assigned_rate_limits": { 00:16:42.257 "rw_ios_per_sec": 0, 00:16:42.257 "rw_mbytes_per_sec": 0, 00:16:42.257 "r_mbytes_per_sec": 0, 00:16:42.257 "w_mbytes_per_sec": 0 00:16:42.257 }, 00:16:42.257 "claimed": true, 00:16:42.257 "claim_type": "read_many_write_one", 00:16:42.257 "zoned": false, 00:16:42.257 "supported_io_types": { 00:16:42.257 "read": true, 00:16:42.257 "write": true, 00:16:42.257 "unmap": true, 00:16:42.257 "flush": true, 00:16:42.257 "reset": true, 00:16:42.257 "nvme_admin": true, 00:16:42.257 "nvme_io": true, 00:16:42.257 "nvme_io_md": false, 00:16:42.257 "write_zeroes": true, 00:16:42.257 "zcopy": false, 00:16:42.257 "get_zone_info": false, 00:16:42.257 "zone_management": false, 00:16:42.257 "zone_append": false, 00:16:42.257 "compare": true, 00:16:42.257 "compare_and_write": false, 00:16:42.257 "abort": true, 00:16:42.257 "seek_hole": false, 00:16:42.257 "seek_data": false, 00:16:42.257 "copy": true, 00:16:42.257 "nvme_iov_md": false 00:16:42.257 }, 00:16:42.257 "driver_specific": { 00:16:42.257 "nvme": [ 00:16:42.257 { 00:16:42.257 "pci_address": "0000:00:11.0", 00:16:42.257 "trid": { 00:16:42.257 "trtype": "PCIe", 00:16:42.257 "traddr": "0000:00:11.0" 00:16:42.257 }, 00:16:42.257 "ctrlr_data": { 00:16:42.257 "cntlid": 0, 00:16:42.257 "vendor_id": "0x1b36", 00:16:42.257 "model_number": "QEMU NVMe Ctrl", 00:16:42.257 "serial_number": "12341", 00:16:42.257 "firmware_revision": "8.0.0", 00:16:42.257 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:42.257 "oacs": { 00:16:42.257 "security": 0, 00:16:42.257 "format": 1, 00:16:42.257 "firmware": 0, 00:16:42.257 "ns_manage": 1 00:16:42.257 }, 00:16:42.257 "multi_ctrlr": false, 00:16:42.257 "ana_reporting": false 00:16:42.257 }, 00:16:42.257 "vs": { 00:16:42.257 "nvme_version": "1.4" 00:16:42.257 }, 00:16:42.257 "ns_data": { 00:16:42.257 "id": 1, 00:16:42.257 "can_share": false 00:16:42.257 } 00:16:42.257 } 00:16:42.257 ], 00:16:42.257 "mp_policy": "active_passive" 00:16:42.257 } 00:16:42.257 } 00:16:42.257 ]' 00:16:42.258 04:07:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:42.258 04:07:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:42.258 04:07:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:42.516 04:07:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:42.516 04:07:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:42.516 04:07:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:42.516 04:07:35 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:42.516 04:07:35 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:42.516 04:07:35 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:42.516 04:07:35 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:42.516 04:07:35 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:42.516 04:07:35 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=13181370-6e7b-4730-95ff-d7d74a24bf2d 00:16:42.516 04:07:35 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:42.516 04:07:35 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 13181370-6e7b-4730-95ff-d7d74a24bf2d 00:16:42.774 04:07:35 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:43.032 04:07:36 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=e90c0766-aa15-4616-9f55-d311ab8d3c28 00:16:43.032 04:07:36 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e90c0766-aa15-4616-9f55-d311ab8d3c28 00:16:43.290 04:07:36 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=e703bbde-7653-42b6-afe6-d8262b4471f3 00:16:43.290 04:07:36 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e703bbde-7653-42b6-afe6-d8262b4471f3 00:16:43.290 04:07:36 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:43.290 04:07:36 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:43.290 04:07:36 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=e703bbde-7653-42b6-afe6-d8262b4471f3 00:16:43.290 04:07:36 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:43.290 04:07:36 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size e703bbde-7653-42b6-afe6-d8262b4471f3 00:16:43.290 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=e703bbde-7653-42b6-afe6-d8262b4471f3 00:16:43.290 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:43.290 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:43.290 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:43.290 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e703bbde-7653-42b6-afe6-d8262b4471f3 00:16:43.548 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:43.548 { 00:16:43.548 "name": "e703bbde-7653-42b6-afe6-d8262b4471f3", 00:16:43.548 "aliases": [ 00:16:43.548 "lvs/nvme0n1p0" 00:16:43.548 ], 00:16:43.548 "product_name": "Logical Volume", 00:16:43.548 "block_size": 4096, 00:16:43.548 "num_blocks": 26476544, 00:16:43.548 "uuid": "e703bbde-7653-42b6-afe6-d8262b4471f3", 00:16:43.548 "assigned_rate_limits": { 00:16:43.548 "rw_ios_per_sec": 0, 00:16:43.548 "rw_mbytes_per_sec": 0, 00:16:43.548 "r_mbytes_per_sec": 0, 00:16:43.548 "w_mbytes_per_sec": 0 00:16:43.548 }, 00:16:43.548 "claimed": false, 00:16:43.548 "zoned": false, 00:16:43.548 "supported_io_types": { 00:16:43.548 "read": true, 00:16:43.548 "write": true, 00:16:43.548 "unmap": true, 00:16:43.548 "flush": false, 00:16:43.548 "reset": true, 00:16:43.548 "nvme_admin": false, 00:16:43.548 "nvme_io": false, 00:16:43.548 "nvme_io_md": false, 00:16:43.548 "write_zeroes": true, 00:16:43.548 "zcopy": false, 00:16:43.548 "get_zone_info": false, 00:16:43.548 "zone_management": false, 00:16:43.548 "zone_append": false, 00:16:43.548 "compare": false, 00:16:43.548 "compare_and_write": false, 00:16:43.548 "abort": false, 00:16:43.548 "seek_hole": true, 00:16:43.548 "seek_data": true, 00:16:43.548 "copy": false, 00:16:43.548 "nvme_iov_md": false 00:16:43.548 }, 00:16:43.548 "driver_specific": { 00:16:43.548 "lvol": { 00:16:43.548 "lvol_store_uuid": "e90c0766-aa15-4616-9f55-d311ab8d3c28", 00:16:43.548 "base_bdev": "nvme0n1", 00:16:43.548 "thin_provision": true, 00:16:43.548 "num_allocated_clusters": 0, 00:16:43.548 "snapshot": false, 00:16:43.548 "clone": false, 00:16:43.548 "esnap_clone": false 00:16:43.548 } 00:16:43.548 } 00:16:43.548 } 00:16:43.548 ]' 00:16:43.548 04:07:36 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:43.548 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:43.548 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:43.548 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:43.548 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:43.548 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:43.548 04:07:36 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:43.548 04:07:36 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:43.548 04:07:36 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:43.805 04:07:36 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:43.805 04:07:36 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:43.805 04:07:36 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size e703bbde-7653-42b6-afe6-d8262b4471f3 00:16:43.805 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=e703bbde-7653-42b6-afe6-d8262b4471f3 00:16:43.805 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:43.805 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:43.805 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:43.805 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e703bbde-7653-42b6-afe6-d8262b4471f3 00:16:44.063 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:44.063 { 00:16:44.063 "name": "e703bbde-7653-42b6-afe6-d8262b4471f3", 00:16:44.063 "aliases": [ 00:16:44.063 "lvs/nvme0n1p0" 00:16:44.063 ], 00:16:44.063 "product_name": "Logical Volume", 00:16:44.063 "block_size": 4096, 00:16:44.063 "num_blocks": 26476544, 00:16:44.063 "uuid": "e703bbde-7653-42b6-afe6-d8262b4471f3", 00:16:44.063 "assigned_rate_limits": { 00:16:44.063 "rw_ios_per_sec": 0, 00:16:44.063 "rw_mbytes_per_sec": 0, 00:16:44.063 "r_mbytes_per_sec": 0, 00:16:44.063 "w_mbytes_per_sec": 0 00:16:44.063 }, 00:16:44.063 "claimed": false, 00:16:44.063 "zoned": false, 00:16:44.063 "supported_io_types": { 00:16:44.063 "read": true, 00:16:44.063 "write": true, 00:16:44.063 "unmap": true, 00:16:44.063 "flush": false, 00:16:44.063 "reset": true, 00:16:44.063 "nvme_admin": false, 00:16:44.063 "nvme_io": false, 00:16:44.063 "nvme_io_md": false, 00:16:44.063 "write_zeroes": true, 00:16:44.063 "zcopy": false, 00:16:44.063 "get_zone_info": false, 00:16:44.063 "zone_management": false, 00:16:44.063 "zone_append": false, 00:16:44.063 "compare": false, 00:16:44.063 "compare_and_write": false, 00:16:44.063 "abort": false, 00:16:44.063 "seek_hole": true, 00:16:44.063 "seek_data": true, 00:16:44.063 "copy": false, 00:16:44.063 "nvme_iov_md": false 00:16:44.063 }, 00:16:44.063 "driver_specific": { 00:16:44.063 "lvol": { 00:16:44.063 "lvol_store_uuid": "e90c0766-aa15-4616-9f55-d311ab8d3c28", 00:16:44.063 "base_bdev": "nvme0n1", 00:16:44.063 "thin_provision": true, 00:16:44.063 "num_allocated_clusters": 0, 00:16:44.063 "snapshot": false, 00:16:44.063 "clone": false, 00:16:44.063 "esnap_clone": false 00:16:44.063 } 00:16:44.063 } 00:16:44.063 } 00:16:44.063 ]' 00:16:44.063 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:44.063 04:07:37 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:44.063 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:44.063 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:44.063 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:44.063 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:44.063 04:07:37 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:44.063 04:07:37 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:44.320 04:07:37 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:44.320 04:07:37 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:44.320 04:07:37 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size e703bbde-7653-42b6-afe6-d8262b4471f3 00:16:44.320 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=e703bbde-7653-42b6-afe6-d8262b4471f3 00:16:44.320 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:44.321 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:44.321 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:44.321 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e703bbde-7653-42b6-afe6-d8262b4471f3 00:16:44.321 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:44.321 { 00:16:44.321 "name": "e703bbde-7653-42b6-afe6-d8262b4471f3", 00:16:44.321 "aliases": [ 00:16:44.321 "lvs/nvme0n1p0" 00:16:44.321 ], 00:16:44.321 "product_name": "Logical Volume", 00:16:44.321 "block_size": 4096, 00:16:44.321 "num_blocks": 26476544, 00:16:44.321 "uuid": "e703bbde-7653-42b6-afe6-d8262b4471f3", 00:16:44.321 "assigned_rate_limits": { 00:16:44.321 "rw_ios_per_sec": 0, 00:16:44.321 "rw_mbytes_per_sec": 0, 00:16:44.321 "r_mbytes_per_sec": 0, 00:16:44.321 "w_mbytes_per_sec": 0 00:16:44.321 }, 00:16:44.321 "claimed": false, 00:16:44.321 "zoned": false, 00:16:44.321 "supported_io_types": { 00:16:44.321 "read": true, 00:16:44.321 "write": true, 00:16:44.321 "unmap": true, 00:16:44.321 "flush": false, 00:16:44.321 "reset": true, 00:16:44.321 "nvme_admin": false, 00:16:44.321 "nvme_io": false, 00:16:44.321 "nvme_io_md": false, 00:16:44.321 "write_zeroes": true, 00:16:44.321 "zcopy": false, 00:16:44.321 "get_zone_info": false, 00:16:44.321 "zone_management": false, 00:16:44.321 "zone_append": false, 00:16:44.321 "compare": false, 00:16:44.321 "compare_and_write": false, 00:16:44.321 "abort": false, 00:16:44.321 "seek_hole": true, 00:16:44.321 "seek_data": true, 00:16:44.321 "copy": false, 00:16:44.321 "nvme_iov_md": false 00:16:44.321 }, 00:16:44.321 "driver_specific": { 00:16:44.321 "lvol": { 00:16:44.321 "lvol_store_uuid": "e90c0766-aa15-4616-9f55-d311ab8d3c28", 00:16:44.321 "base_bdev": "nvme0n1", 00:16:44.321 "thin_provision": true, 00:16:44.321 "num_allocated_clusters": 0, 00:16:44.321 "snapshot": false, 00:16:44.321 "clone": false, 00:16:44.321 "esnap_clone": false 00:16:44.321 } 00:16:44.321 } 00:16:44.321 } 00:16:44.321 ]' 00:16:44.321 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:44.580 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:44.580 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:44.580 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:44.580 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:44.580 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:44.580 04:07:37 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:44.580 04:07:37 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e703bbde-7653-42b6-afe6-d8262b4471f3 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:44.580 [2024-10-13 04:07:37.717477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.580 [2024-10-13 04:07:37.717525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:44.580 [2024-10-13 04:07:37.717542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:44.580 [2024-10-13 04:07:37.717551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.580 [2024-10-13 04:07:37.720296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.580 [2024-10-13 04:07:37.720332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.580 [2024-10-13 04:07:37.720345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:16:44.580 [2024-10-13 04:07:37.720353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.580 [2024-10-13 04:07:37.720461] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:44.580 [2024-10-13 04:07:37.721135] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:44.580 [2024-10-13 04:07:37.721165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.580 [2024-10-13 04:07:37.721172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.580 [2024-10-13 04:07:37.721183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:16:44.580 [2024-10-13 04:07:37.721190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.580 [2024-10-13 04:07:37.721397] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 44f1066e-b8f1-4a97-9db9-72a27b163b91 00:16:44.580 [2024-10-13 04:07:37.722432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.580 [2024-10-13 04:07:37.722465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:44.580 [2024-10-13 04:07:37.722476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:44.580 [2024-10-13 04:07:37.722487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.580 [2024-10-13 04:07:37.727687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.580 [2024-10-13 04:07:37.727716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.580 [2024-10-13 04:07:37.727724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.125 ms 00:16:44.580 [2024-10-13 04:07:37.727733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.580 [2024-10-13 04:07:37.727853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.580 [2024-10-13 04:07:37.727868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.580 [2024-10-13 04:07:37.727876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.066 ms 00:16:44.580 [2024-10-13 04:07:37.727888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.580 [2024-10-13 04:07:37.727922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.580 [2024-10-13 04:07:37.727931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:44.581 [2024-10-13 04:07:37.727939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:44.581 [2024-10-13 04:07:37.727948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.581 [2024-10-13 04:07:37.727976] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:44.581 [2024-10-13 04:07:37.731507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.581 [2024-10-13 04:07:37.731536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.581 [2024-10-13 04:07:37.731546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.533 ms 00:16:44.581 [2024-10-13 04:07:37.731553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.581 [2024-10-13 04:07:37.731599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.581 [2024-10-13 04:07:37.731607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:44.581 [2024-10-13 04:07:37.731631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:44.581 [2024-10-13 04:07:37.731650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.581 [2024-10-13 04:07:37.731682] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:44.581 [2024-10-13 04:07:37.731814] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:44.581 [2024-10-13 04:07:37.731828] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:44.581 [2024-10-13 04:07:37.731840] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:44.581 [2024-10-13 04:07:37.731852] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:44.581 [2024-10-13 04:07:37.731860] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:44.581 [2024-10-13 04:07:37.731869] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:44.581 [2024-10-13 04:07:37.731876] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:44.581 [2024-10-13 04:07:37.731885] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:44.581 [2024-10-13 04:07:37.731892] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:44.581 [2024-10-13 04:07:37.731901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.581 [2024-10-13 04:07:37.731907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:44.581 [2024-10-13 04:07:37.731919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:16:44.581 [2024-10-13 04:07:37.731926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.581 [2024-10-13 04:07:37.732040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.581 
[2024-10-13 04:07:37.732054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:44.581 [2024-10-13 04:07:37.732064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:16:44.581 [2024-10-13 04:07:37.732071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.581 [2024-10-13 04:07:37.732193] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:44.581 [2024-10-13 04:07:37.732202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:44.581 [2024-10-13 04:07:37.732212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.581 [2024-10-13 04:07:37.732221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:44.581 [2024-10-13 04:07:37.732237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:44.581 [2024-10-13 04:07:37.732252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:44.581 [2024-10-13 04:07:37.732261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.581 [2024-10-13 04:07:37.732276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:44.581 [2024-10-13 04:07:37.732282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:44.581 [2024-10-13 04:07:37.732290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.581 [2024-10-13 04:07:37.732296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:44.581 [2024-10-13 04:07:37.732304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:44.581 [2024-10-13 04:07:37.732311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:44.581 [2024-10-13 04:07:37.732327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:44.581 [2024-10-13 04:07:37.732335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:44.581 [2024-10-13 04:07:37.732351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.581 [2024-10-13 04:07:37.732367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:44.581 [2024-10-13 04:07:37.732373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.581 [2024-10-13 04:07:37.732388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:44.581 [2024-10-13 04:07:37.732396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.581 [2024-10-13 04:07:37.732410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:44.581 [2024-10-13 04:07:37.732417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.581 [2024-10-13 04:07:37.732431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:44.581 [2024-10-13 04:07:37.732440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.581 [2024-10-13 04:07:37.732455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:44.581 [2024-10-13 04:07:37.732462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:44.581 [2024-10-13 04:07:37.732470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.581 [2024-10-13 04:07:37.732476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:44.581 [2024-10-13 04:07:37.732484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:44.581 [2024-10-13 04:07:37.732491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:44.581 [2024-10-13 04:07:37.732505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:44.581 [2024-10-13 04:07:37.732513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732519] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:44.581 [2024-10-13 04:07:37.732528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:44.581 [2024-10-13 04:07:37.732534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.581 [2024-10-13 04:07:37.732542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.581 [2024-10-13 04:07:37.732550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:44.581 [2024-10-13 04:07:37.732560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:44.581 [2024-10-13 04:07:37.732567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:44.581 [2024-10-13 04:07:37.732575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:44.581 [2024-10-13 04:07:37.732581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:44.581 [2024-10-13 04:07:37.732589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:44.581 [2024-10-13 04:07:37.732599] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:44.581 [2024-10-13 04:07:37.732623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.581 [2024-10-13 04:07:37.732632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:44.581 [2024-10-13 04:07:37.732641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:44.581 [2024-10-13 04:07:37.732648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:44.581 [2024-10-13 04:07:37.732656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:44.581 [2024-10-13 04:07:37.732663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:44.581 [2024-10-13 04:07:37.732672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:44.581 [2024-10-13 04:07:37.732679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:44.581 [2024-10-13 04:07:37.732687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:44.581 [2024-10-13 04:07:37.732694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:44.581 [2024-10-13 04:07:37.732705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:44.581 [2024-10-13 04:07:37.732712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:44.581 [2024-10-13 04:07:37.732720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:44.581 [2024-10-13 04:07:37.732727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:44.581 [2024-10-13 04:07:37.732736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:44.581 [2024-10-13 04:07:37.732744] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:44.581 [2024-10-13 04:07:37.732753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.581 [2024-10-13 04:07:37.732761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:44.581 [2024-10-13 04:07:37.732771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:44.581 [2024-10-13 04:07:37.732778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:44.581 [2024-10-13 04:07:37.732786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:44.582 [2024-10-13 04:07:37.732794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.582 [2024-10-13 04:07:37.732806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:44.582 [2024-10-13 04:07:37.732813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:16:44.582 [2024-10-13 04:07:37.732822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.582 [2024-10-13 04:07:37.732897] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:44.582 [2024-10-13 04:07:37.732909] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:47.112 [2024-10-13 04:07:39.799518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:39.799726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:47.112 [2024-10-13 04:07:39.799799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2066.609 ms 00:16:47.112 [2024-10-13 04:07:39.799828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:39.825031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:39.825180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:47.112 [2024-10-13 04:07:39.825237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.920 ms 00:16:47.112 [2024-10-13 04:07:39.825262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:39.825419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:39.825528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:47.112 [2024-10-13 04:07:39.825552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:47.112 [2024-10-13 04:07:39.825576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:39.863373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:39.863518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:47.112 [2024-10-13 04:07:39.863578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.721 ms 00:16:47.112 [2024-10-13 04:07:39.863609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:39.863708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:39.863738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:47.112 [2024-10-13 04:07:39.863802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:47.112 [2024-10-13 04:07:39.863827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:39.864164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:39.864264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:47.112 [2024-10-13 04:07:39.864322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:16:47.112 [2024-10-13 04:07:39.864345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:39.864472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:39.864497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:47.112 [2024-10-13 04:07:39.864547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:16:47.112 [2024-10-13 04:07:39.864572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:39.879670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:39.879783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:47.112 [2024-10-13 04:07:39.879888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.030 ms 00:16:47.112 [2024-10-13 04:07:39.879916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:39.891328] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:47.112 [2024-10-13 04:07:39.905583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:39.905704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:47.112 [2024-10-13 04:07:39.905753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.509 ms 00:16:47.112 [2024-10-13 04:07:39.905775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:39.969402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:39.969544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:47.112 [2024-10-13 04:07:39.969565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.547 ms 00:16:47.112 [2024-10-13 04:07:39.969576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:39.969791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:39.969803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:47.112 [2024-10-13 04:07:39.969815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:16:47.112 [2024-10-13 04:07:39.969823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:39.992019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:39.992134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:47.112 [2024-10-13 04:07:39.992156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.164 ms 00:16:47.112 [2024-10-13 04:07:39.992163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:40.014672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:40.014793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:47.112 [2024-10-13 04:07:40.014811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.452 ms 00:16:47.112 [2024-10-13 04:07:40.014818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:40.015411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:40.015432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:47.112 [2024-10-13 04:07:40.015443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:16:47.112 [2024-10-13 04:07:40.015450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:40.088044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:40.088100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:47.112 [2024-10-13 04:07:40.088118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.558 ms 00:16:47.112 [2024-10-13 04:07:40.088127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:47.112 [2024-10-13 04:07:40.112376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:40.112416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:47.112 [2024-10-13 04:07:40.112430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.126 ms 00:16:47.112 [2024-10-13 04:07:40.112438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:40.135243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:40.135275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:47.112 [2024-10-13 04:07:40.135287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.748 ms 00:16:47.112 [2024-10-13 04:07:40.135294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:40.158137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:40.158264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:47.112 [2024-10-13 04:07:40.158284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.758 ms 00:16:47.112 [2024-10-13 04:07:40.158303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:40.158364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:40.158374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:47.112 [2024-10-13 04:07:40.158386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:47.112 [2024-10-13 04:07:40.158394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:40.158471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.112 [2024-10-13 04:07:40.158479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:47.112 [2024-10-13 04:07:40.158489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:47.112 [2024-10-13 04:07:40.158496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.112 [2024-10-13 04:07:40.159258] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:47.112 [2024-10-13 04:07:40.162247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2441.503 ms, result 0 00:16:47.112 [2024-10-13 04:07:40.162935] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:47.112 { 00:16:47.112 "name": "ftl0", 00:16:47.112 "uuid": "44f1066e-b8f1-4a97-9db9-72a27b163b91" 00:16:47.112 } 00:16:47.112 04:07:40 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:47.112 04:07:40 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:47.112 04:07:40 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:47.112 04:07:40 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:47.112 04:07:40 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:47.112 04:07:40 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:47.112 04:07:40 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:47.370 04:07:40 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:47.628 [ 00:16:47.628 { 00:16:47.628 "name": "ftl0", 00:16:47.628 "aliases": [ 00:16:47.628 "44f1066e-b8f1-4a97-9db9-72a27b163b91" 00:16:47.628 ], 00:16:47.628 "product_name": "FTL disk", 00:16:47.628 "block_size": 4096, 00:16:47.628 "num_blocks": 23592960, 00:16:47.628 "uuid": "44f1066e-b8f1-4a97-9db9-72a27b163b91", 00:16:47.628 "assigned_rate_limits": { 00:16:47.628 "rw_ios_per_sec": 0, 00:16:47.628 "rw_mbytes_per_sec": 0, 00:16:47.628 "r_mbytes_per_sec": 0, 00:16:47.628 "w_mbytes_per_sec": 0 00:16:47.628 }, 00:16:47.628 "claimed": false, 00:16:47.628 "zoned": false, 00:16:47.628 "supported_io_types": { 00:16:47.628 "read": true, 00:16:47.628 "write": true, 00:16:47.628 "unmap": true, 00:16:47.628 "flush": true, 00:16:47.628 "reset": false, 00:16:47.628 "nvme_admin": false, 00:16:47.628 "nvme_io": false, 00:16:47.628 "nvme_io_md": false, 00:16:47.628 "write_zeroes": true, 00:16:47.628 "zcopy": false, 00:16:47.628 "get_zone_info": false, 00:16:47.628 "zone_management": false, 00:16:47.628 "zone_append": false, 00:16:47.628 "compare": false, 00:16:47.628 "compare_and_write": false, 00:16:47.628 "abort": false, 00:16:47.628 "seek_hole": false, 00:16:47.628 "seek_data": false, 00:16:47.628 "copy": false, 00:16:47.628 "nvme_iov_md": false 00:16:47.628 }, 00:16:47.628 "driver_specific": { 00:16:47.628 "ftl": { 00:16:47.628 "base_bdev": "e703bbde-7653-42b6-afe6-d8262b4471f3", 00:16:47.628 "cache": "nvc0n1p0" 00:16:47.628 } 00:16:47.628 } 00:16:47.628 } 00:16:47.628 ] 00:16:47.628 04:07:40 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:47.628 04:07:40 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:47.628 04:07:40 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:47.628 04:07:40 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:47.628 04:07:40 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:47.901 04:07:40 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:47.901 { 00:16:47.901 "name": "ftl0", 00:16:47.901 "aliases": [ 00:16:47.901 "44f1066e-b8f1-4a97-9db9-72a27b163b91" 00:16:47.901 ], 00:16:47.901 "product_name": "FTL disk", 00:16:47.901 "block_size": 4096, 00:16:47.901 "num_blocks": 23592960, 00:16:47.901 "uuid": "44f1066e-b8f1-4a97-9db9-72a27b163b91", 00:16:47.901 "assigned_rate_limits": { 00:16:47.901 "rw_ios_per_sec": 0, 00:16:47.901 "rw_mbytes_per_sec": 0, 00:16:47.901 "r_mbytes_per_sec": 0, 00:16:47.901 "w_mbytes_per_sec": 0 00:16:47.901 }, 00:16:47.901 "claimed": false, 00:16:47.901 "zoned": false, 00:16:47.901 "supported_io_types": { 00:16:47.901 "read": true, 00:16:47.901 "write": true, 00:16:47.901 "unmap": true, 00:16:47.901 "flush": true, 00:16:47.901 "reset": false, 00:16:47.901 "nvme_admin": false, 00:16:47.901 "nvme_io": false, 00:16:47.901 "nvme_io_md": false, 00:16:47.901 "write_zeroes": true, 00:16:47.901 "zcopy": false, 00:16:47.901 "get_zone_info": false, 00:16:47.901 "zone_management": false, 00:16:47.901 "zone_append": false, 00:16:47.901 "compare": false, 00:16:47.901 "compare_and_write": false, 00:16:47.901 "abort": false, 00:16:47.901 "seek_hole": false, 00:16:47.901 "seek_data": false, 00:16:47.901 "copy": false, 00:16:47.901 "nvme_iov_md": false 00:16:47.901 }, 00:16:47.901 "driver_specific": { 00:16:47.901 "ftl": { 00:16:47.901 "base_bdev": "e703bbde-7653-42b6-afe6-d8262b4471f3", 
00:16:47.901 "cache": "nvc0n1p0" 00:16:47.901 } 00:16:47.901 } 00:16:47.901 } 00:16:47.901 ]' 00:16:47.901 04:07:40 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:47.901 04:07:41 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:47.901 04:07:41 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:48.186 [2024-10-13 04:07:41.178205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.186 [2024-10-13 04:07:41.178250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:48.186 [2024-10-13 04:07:41.178263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:48.186 [2024-10-13 04:07:41.178273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.186 [2024-10-13 04:07:41.178308] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:48.187 [2024-10-13 04:07:41.180917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.187 [2024-10-13 04:07:41.181047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:48.187 [2024-10-13 04:07:41.181070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.592 ms 00:16:48.187 [2024-10-13 04:07:41.181079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.187 [2024-10-13 04:07:41.181560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.187 [2024-10-13 04:07:41.181577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:48.187 [2024-10-13 04:07:41.181587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:16:48.187 [2024-10-13 04:07:41.181594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.187 [2024-10-13 04:07:41.185252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.187 [2024-10-13 04:07:41.185272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:48.187 [2024-10-13 04:07:41.185284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.617 ms 00:16:48.187 [2024-10-13 04:07:41.185295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.187 [2024-10-13 04:07:41.192259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.187 [2024-10-13 04:07:41.192371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:48.187 [2024-10-13 04:07:41.192388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.911 ms 00:16:48.187 [2024-10-13 04:07:41.192396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.187 [2024-10-13 04:07:41.215788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.187 [2024-10-13 04:07:41.215820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:48.187 [2024-10-13 04:07:41.215835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.322 ms 00:16:48.187 [2024-10-13 04:07:41.215842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.187 [2024-10-13 04:07:41.230335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.187 [2024-10-13 04:07:41.230368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:48.187 [2024-10-13 04:07:41.230381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.431 ms 00:16:48.187 [2024-10-13 04:07:41.230389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.187 [2024-10-13 04:07:41.230577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.187 [2024-10-13 04:07:41.230590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:48.187 [2024-10-13 04:07:41.230600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:16:48.187 [2024-10-13 04:07:41.230607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.187 [2024-10-13 04:07:41.253625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.187 [2024-10-13 04:07:41.253657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:48.187 [2024-10-13 04:07:41.253669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.974 ms 00:16:48.187 [2024-10-13 04:07:41.253676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.187 [2024-10-13 04:07:41.276162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.187 [2024-10-13 04:07:41.276192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:48.187 [2024-10-13 04:07:41.276206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.429 ms 00:16:48.187 [2024-10-13 04:07:41.276213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.187 [2024-10-13 04:07:41.298722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.187 [2024-10-13 04:07:41.298842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:48.187 [2024-10-13 04:07:41.298860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.451 ms 00:16:48.187 [2024-10-13 04:07:41.298867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.187 [2024-10-13 04:07:41.321454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.187 [2024-10-13 04:07:41.321484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:48.187 [2024-10-13 04:07:41.321496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.495 ms 00:16:48.187 [2024-10-13 04:07:41.321504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.187 [2024-10-13 04:07:41.321562] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:48.187 [2024-10-13 04:07:41.321577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321669] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 
[2024-10-13 04:07:41.321908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.321996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:16:48.187 [2024-10-13 04:07:41.322116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:48.187 [2024-10-13 04:07:41.322123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:48.188 [2024-10-13 04:07:41.322472] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:48.188 [2024-10-13 04:07:41.322482] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 44f1066e-b8f1-4a97-9db9-72a27b163b91 00:16:48.188 [2024-10-13 04:07:41.322490] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:48.188 [2024-10-13 04:07:41.322498] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:48.188 [2024-10-13 04:07:41.322505] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:48.188 [2024-10-13 04:07:41.322513] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:48.188 [2024-10-13 04:07:41.322520] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:48.188 [2024-10-13 04:07:41.322529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:16:48.188 [2024-10-13 04:07:41.322536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:48.188 [2024-10-13 04:07:41.322544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:48.188 [2024-10-13 04:07:41.322550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:48.188 [2024-10-13 04:07:41.322559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.188 [2024-10-13 04:07:41.322568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:48.188 [2024-10-13 04:07:41.322579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:16:48.188 [2024-10-13 04:07:41.322586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.334986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.463 [2024-10-13 04:07:41.335014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:48.463 [2024-10-13 04:07:41.335027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.351 ms 00:16:48.463 [2024-10-13 04:07:41.335035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.335407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.463 [2024-10-13 04:07:41.335417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:48.463 [2024-10-13 04:07:41.335427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:16:48.463 [2024-10-13 04:07:41.335434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.379130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.463 [2024-10-13 04:07:41.379164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:48.463 [2024-10-13 04:07:41.379175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.463 [2024-10-13 04:07:41.379183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.379287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.463 [2024-10-13 04:07:41.379296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:48.463 [2024-10-13 04:07:41.379306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.463 [2024-10-13 04:07:41.379313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.379373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.463 [2024-10-13 04:07:41.379382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:48.463 [2024-10-13 04:07:41.379393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.463 [2024-10-13 04:07:41.379400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.379427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.463 [2024-10-13 04:07:41.379434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:48.463 [2024-10-13 04:07:41.379443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.463 [2024-10-13 04:07:41.379450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.460634] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.463 [2024-10-13 04:07:41.460808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:48.463 [2024-10-13 04:07:41.460827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.463 [2024-10-13 04:07:41.460836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.523441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.463 [2024-10-13 04:07:41.523480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:48.463 [2024-10-13 04:07:41.523492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.463 [2024-10-13 04:07:41.523500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.523581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.463 [2024-10-13 04:07:41.523591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:48.463 [2024-10-13 04:07:41.523639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.463 [2024-10-13 04:07:41.523647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.523709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.463 [2024-10-13 04:07:41.523720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:48.463 [2024-10-13 04:07:41.523729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.463 [2024-10-13 04:07:41.523736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.523839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.463 [2024-10-13 04:07:41.523849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:48.463 [2024-10-13 04:07:41.523859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.463 [2024-10-13 04:07:41.523866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.523917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.463 [2024-10-13 04:07:41.523926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:48.463 [2024-10-13 04:07:41.523937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.463 [2024-10-13 04:07:41.523944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.524002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.463 [2024-10-13 04:07:41.524012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:48.463 [2024-10-13 04:07:41.524023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.463 [2024-10-13 04:07:41.524032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.463 [2024-10-13 04:07:41.524086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:48.463 [2024-10-13 04:07:41.524096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:48.463 [2024-10-13 04:07:41.524107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:48.463 [2024-10-13 04:07:41.524114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:48.463 [2024-10-13 04:07:41.524276] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.054 ms, result 0 00:16:48.463 true 00:16:48.463 04:07:41 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73809 00:16:48.463 04:07:41 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 73809 ']' 00:16:48.463 04:07:41 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 73809 00:16:48.463 04:07:41 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:48.463 04:07:41 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:48.463 04:07:41 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73809 00:16:48.463 killing process with pid 73809 00:16:48.463 04:07:41 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:48.463 04:07:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:48.463 04:07:41 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73809' 00:16:48.463 04:07:41 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 73809 00:16:48.463 04:07:41 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 73809 00:16:55.024 04:07:47 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:55.958 65536+0 records in 00:16:55.958 65536+0 records out 00:16:55.958 268435456 bytes (268 MB, 256 MiB) copied, 1.06924 s, 251 MB/s 00:16:55.958 04:07:48 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:55.958 [2024-10-13 04:07:48.924892] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:16:55.958 [2024-10-13 04:07:48.925010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73994 ] 00:16:55.958 [2024-10-13 04:07:49.076360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.216 [2024-10-13 04:07:49.172737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.474 [2024-10-13 04:07:49.425530] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:56.474 [2024-10-13 04:07:49.425593] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:56.474 [2024-10-13 04:07:49.583705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.474 [2024-10-13 04:07:49.583753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:56.474 [2024-10-13 04:07:49.583765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:56.474 [2024-10-13 04:07:49.583773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.474 [2024-10-13 04:07:49.586837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.474 [2024-10-13 04:07:49.586978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.474 [2024-10-13 04:07:49.586996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.045 ms 00:16:56.474 [2024-10-13 04:07:49.587004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.474 [2024-10-13 04:07:49.587392] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:56.474 [2024-10-13 04:07:49.588164] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:56.474 [2024-10-13 04:07:49.588197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.474 [2024-10-13 04:07:49.588206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.475 [2024-10-13 04:07:49.588215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:16:56.475 [2024-10-13 04:07:49.588223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.475 [2024-10-13 04:07:49.589323] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:56.475 [2024-10-13 04:07:49.601723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.475 [2024-10-13 04:07:49.601865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:56.475 [2024-10-13 04:07:49.601882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.402 ms 00:16:56.475 [2024-10-13 04:07:49.601895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.475 [2024-10-13 04:07:49.601975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.475 [2024-10-13 04:07:49.601986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:56.475 [2024-10-13 04:07:49.601994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:56.475 [2024-10-13 04:07:49.602001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.475 [2024-10-13 04:07:49.606915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:56.475 [2024-10-13 04:07:49.606949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.475 [2024-10-13 04:07:49.606958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.875 ms 00:16:56.475 [2024-10-13 04:07:49.606965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.475 [2024-10-13 04:07:49.607049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.475 [2024-10-13 04:07:49.607058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.475 [2024-10-13 04:07:49.607066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:56.475 [2024-10-13 04:07:49.607073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.475 [2024-10-13 04:07:49.607096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.475 [2024-10-13 04:07:49.607104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:56.475 [2024-10-13 04:07:49.607111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:56.475 [2024-10-13 04:07:49.607121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.475 [2024-10-13 04:07:49.607140] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:56.475 [2024-10-13 04:07:49.610361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.475 [2024-10-13 04:07:49.610388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.475 [2024-10-13 04:07:49.610396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms 00:16:56.475 [2024-10-13 04:07:49.610403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.475 [2024-10-13 04:07:49.610437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.475 [2024-10-13 04:07:49.610445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:56.475 [2024-10-13 04:07:49.610452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:56.475 [2024-10-13 04:07:49.610459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.475 [2024-10-13 04:07:49.610476] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:56.475 [2024-10-13 04:07:49.610493] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:56.475 [2024-10-13 04:07:49.610529] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:56.475 [2024-10-13 04:07:49.610543] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:56.475 [2024-10-13 04:07:49.610657] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:56.475 [2024-10-13 04:07:49.610667] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:56.475 [2024-10-13 04:07:49.610678] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:56.475 [2024-10-13 04:07:49.610687] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:56.475 [2024-10-13 04:07:49.610697] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:56.475 [2024-10-13 04:07:49.610705] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:56.475 [2024-10-13 04:07:49.610714] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:56.475 [2024-10-13 04:07:49.610721] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:56.475 [2024-10-13 04:07:49.610728] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:56.475 [2024-10-13 04:07:49.610735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.475 [2024-10-13 04:07:49.610743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:56.475 [2024-10-13 04:07:49.610750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:16:56.475 [2024-10-13 04:07:49.610757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.475 [2024-10-13 04:07:49.610844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.475 [2024-10-13 04:07:49.610852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:56.475 [2024-10-13 04:07:49.610859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:56.475 [2024-10-13 04:07:49.610868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.475 [2024-10-13 04:07:49.610967] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:56.475 [2024-10-13 04:07:49.610976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:56.475 [2024-10-13 04:07:49.610984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.475 [2024-10-13 04:07:49.610997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:56.475 [2024-10-13 04:07:49.611010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:56.475 [2024-10-13 04:07:49.611023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:56.475 [2024-10-13 04:07:49.611030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.475 [2024-10-13 04:07:49.611043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:56.475 [2024-10-13 04:07:49.611049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:56.475 [2024-10-13 04:07:49.611056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:56.475 [2024-10-13 04:07:49.611067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:56.475 [2024-10-13 04:07:49.611076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:56.475 [2024-10-13 04:07:49.611083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:56.475 [2024-10-13 04:07:49.611097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:56.475 [2024-10-13 04:07:49.611103] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:56.475 [2024-10-13 04:07:49.611116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.475 [2024-10-13 04:07:49.611129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:56.475 [2024-10-13 04:07:49.611135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.475 [2024-10-13 04:07:49.611148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:56.475 [2024-10-13 04:07:49.611154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.475 [2024-10-13 04:07:49.611166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:56.475 [2024-10-13 04:07:49.611172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:56.475 [2024-10-13 04:07:49.611185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:56.475 [2024-10-13 04:07:49.611191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.475 [2024-10-13 04:07:49.611204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:56.475 [2024-10-13 04:07:49.611210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:56.475 [2024-10-13 04:07:49.611216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:56.475 [2024-10-13 04:07:49.611222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:56.475 [2024-10-13 04:07:49.611229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:56.475 [2024-10-13 04:07:49.611235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:56.475 [2024-10-13 04:07:49.611247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:56.475 [2024-10-13 04:07:49.611253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611259] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:56.475 [2024-10-13 04:07:49.611267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:56.475 [2024-10-13 04:07:49.611276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:56.475 [2024-10-13 04:07:49.611284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:56.475 [2024-10-13 04:07:49.611292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:56.475 [2024-10-13 04:07:49.611298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:56.475 [2024-10-13 04:07:49.611305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:56.475 
[2024-10-13 04:07:49.611312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:56.475 [2024-10-13 04:07:49.611318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:56.475 [2024-10-13 04:07:49.611324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:56.475 [2024-10-13 04:07:49.611332] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:56.475 [2024-10-13 04:07:49.611343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.475 [2024-10-13 04:07:49.611351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:56.475 [2024-10-13 04:07:49.611358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:56.475 [2024-10-13 04:07:49.611365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:56.475 [2024-10-13 04:07:49.611372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:56.475 [2024-10-13 04:07:49.611379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:56.475 [2024-10-13 04:07:49.611386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:56.475 [2024-10-13 04:07:49.611393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:56.475 [2024-10-13 04:07:49.611400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:56.475 [2024-10-13 04:07:49.611406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:56.475 [2024-10-13 04:07:49.611413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:56.475 [2024-10-13 04:07:49.611420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:56.475 [2024-10-13 04:07:49.611427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:56.475 [2024-10-13 04:07:49.611434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:56.475 [2024-10-13 04:07:49.611441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:56.475 [2024-10-13 04:07:49.611448] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:56.475 [2024-10-13 04:07:49.611456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:56.475 [2024-10-13 04:07:49.611463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:56.475 [2024-10-13 04:07:49.611470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:56.475 [2024-10-13 04:07:49.611477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:56.475 [2024-10-13 04:07:49.611484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:56.475 [2024-10-13 04:07:49.611491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.475 [2024-10-13 04:07:49.611498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:56.475 [2024-10-13 04:07:49.611506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:16:56.475 [2024-10-13 04:07:49.611516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.637184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.637323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.734 [2024-10-13 04:07:49.637338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.608 ms 00:16:56.734 [2024-10-13 04:07:49.637346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.637464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.637473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:56.734 [2024-10-13 04:07:49.637482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:16:56.734 [2024-10-13 04:07:49.637493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.679689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.679727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.734 [2024-10-13 04:07:49.679739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.176 ms 00:16:56.734 [2024-10-13 04:07:49.679747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.679840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.679852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.734 [2024-10-13 04:07:49.679861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:56.734 [2024-10-13 04:07:49.679868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.680214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.680228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.734 [2024-10-13 04:07:49.680243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:16:56.734 [2024-10-13 04:07:49.680250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.680376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.680393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.734 [2024-10-13 04:07:49.680401] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:16:56.734 [2024-10-13 04:07:49.680409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.693746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.693883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.734 [2024-10-13 04:07:49.693899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.318 ms 00:16:56.734 [2024-10-13 04:07:49.693907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.706796] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:56.734 [2024-10-13 04:07:49.706840] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:56.734 [2024-10-13 04:07:49.706852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.706860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:56.734 [2024-10-13 04:07:49.706868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.843 ms 00:16:56.734 [2024-10-13 04:07:49.706875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.731432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.731464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:56.734 [2024-10-13 04:07:49.731482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.488 ms 00:16:56.734 [2024-10-13 04:07:49.731491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.743402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.743432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:56.734 [2024-10-13 04:07:49.743442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.842 ms 00:16:56.734 [2024-10-13 04:07:49.743448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.755239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.755268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:56.734 [2024-10-13 04:07:49.755279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.729 ms 00:16:56.734 [2024-10-13 04:07:49.755286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.755906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.755924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:56.734 [2024-10-13 04:07:49.755935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:16:56.734 [2024-10-13 04:07:49.755942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.811835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.811893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:56.734 [2024-10-13 04:07:49.811910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.871 ms 00:16:56.734 [2024-10-13 04:07:49.811918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.822088] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:56.734 [2024-10-13 04:07:49.835719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.835756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:56.734 [2024-10-13 04:07:49.835767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.697 ms 00:16:56.734 [2024-10-13 04:07:49.835775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.835851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.835861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:56.734 [2024-10-13 04:07:49.835873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:56.734 [2024-10-13 04:07:49.835880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.835923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.835937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:56.734 [2024-10-13 04:07:49.835945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:56.734 [2024-10-13 04:07:49.835952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.835975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.835983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:56.734 [2024-10-13 04:07:49.836012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:56.734 [2024-10-13 04:07:49.836021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.836050] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:56.734 [2024-10-13 04:07:49.836059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.836067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:56.734 [2024-10-13 04:07:49.836074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:56.734 [2024-10-13 04:07:49.836082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.859530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.859563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:56.734 [2024-10-13 04:07:49.859578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.428 ms 00:16:56.734 [2024-10-13 04:07:49.859586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.734 [2024-10-13 04:07:49.859688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.734 [2024-10-13 04:07:49.859699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:56.734 [2024-10-13 04:07:49.859721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:56.734 [2024-10-13 04:07:49.859729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:56.734 [2024-10-13 04:07:49.861153] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:56.734 [2024-10-13 04:07:49.864070] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 277.169 ms, result 0 00:16:56.734 [2024-10-13 04:07:49.865304] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:56.734 [2024-10-13 04:07:49.878293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:58.109  [2024-10-13T04:07:52.204Z] Copying: 24/256 [MB] (24 MBps) [2024-10-13T04:07:53.155Z] Copying: 59/256 [MB] (34 MBps) [2024-10-13T04:07:54.109Z] Copying: 78/256 [MB] (19 MBps) [2024-10-13T04:07:55.042Z] Copying: 101/256 [MB] (22 MBps) [2024-10-13T04:07:55.976Z] Copying: 124/256 [MB] (23 MBps) [2024-10-13T04:07:56.910Z] Copying: 146/256 [MB] (21 MBps) [2024-10-13T04:07:58.284Z] Copying: 159/256 [MB] (13 MBps) [2024-10-13T04:07:59.219Z] Copying: 177/256 [MB] (17 MBps) [2024-10-13T04:08:00.154Z] Copying: 199/256 [MB] (21 MBps) [2024-10-13T04:08:01.090Z] Copying: 234/256 [MB] (35 MBps) [2024-10-13T04:08:01.090Z] Copying: 255/256 [MB] (20 MBps) [2024-10-13T04:08:01.090Z] Copying: 256/256 [MB] (average 23 MBps)[2024-10-13 04:08:00.917182] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:07.930 [2024-10-13 04:08:00.926437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.930 [2024-10-13 04:08:00.926473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:07.930 [2024-10-13 04:08:00.926486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:07.930 [2024-10-13 04:08:00.926495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.930 [2024-10-13 04:08:00.926515] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:07.930 [2024-10-13 04:08:00.929160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.930 [2024-10-13 04:08:00.929191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:07.930 [2024-10-13 04:08:00.929206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.632 ms 00:17:07.930 [2024-10-13 04:08:00.929213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.930 [2024-10-13 04:08:00.931942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.930 [2024-10-13 04:08:00.931972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:07.930 [2024-10-13 04:08:00.931983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:17:07.930 [2024-10-13 04:08:00.931990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.930 [2024-10-13 04:08:00.938838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.930 [2024-10-13 04:08:00.938868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:07.930 [2024-10-13 04:08:00.938877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.817 ms 00:17:07.930 [2024-10-13 04:08:00.938885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.930 [2024-10-13 04:08:00.946372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.930 
[2024-10-13 04:08:00.946517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:07.930 [2024-10-13 04:08:00.946533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.443 ms 00:17:07.930 [2024-10-13 04:08:00.946541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.930 [2024-10-13 04:08:00.969919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.930 [2024-10-13 04:08:00.969953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:07.930 [2024-10-13 04:08:00.969964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.333 ms 00:17:07.930 [2024-10-13 04:08:00.969972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.930 [2024-10-13 04:08:00.983951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.930 [2024-10-13 04:08:00.983982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:07.930 [2024-10-13 04:08:00.983993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.946 ms 00:17:07.930 [2024-10-13 04:08:00.984000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.930 [2024-10-13 04:08:00.984141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.930 [2024-10-13 04:08:00.984155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:07.930 [2024-10-13 04:08:00.984163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:07.930 [2024-10-13 04:08:00.984170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.930 [2024-10-13 04:08:01.007875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.930 [2024-10-13 04:08:01.008017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:07.930 [2024-10-13 04:08:01.008034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.690 ms 00:17:07.930 [2024-10-13 04:08:01.008041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.930 [2024-10-13 04:08:01.031093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.930 [2024-10-13 04:08:01.031219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:07.930 [2024-10-13 04:08:01.031240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.981 ms 00:17:07.930 [2024-10-13 04:08:01.031250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.930 [2024-10-13 04:08:01.053578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.930 [2024-10-13 04:08:01.053629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:07.930 [2024-10-13 04:08:01.053639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.259 ms 00:17:07.930 [2024-10-13 04:08:01.053646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.930 [2024-10-13 04:08:01.075904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.930 [2024-10-13 04:08:01.075934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:07.930 [2024-10-13 04:08:01.075943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.200 ms 00:17:07.930 [2024-10-13 04:08:01.075950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.930 [2024-10-13 04:08:01.075982] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:07.930 [2024-10-13 04:08:01.075995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:07.930 [2024-10-13 04:08:01.076017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:07.930 [2024-10-13 04:08:01.076026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:07.930 [2024-10-13 04:08:01.076033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:07.930 [2024-10-13 04:08:01.076041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:07.930 [2024-10-13 04:08:01.076049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:07.930 [2024-10-13 04:08:01.076056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:07.930 [2024-10-13 04:08:01.076063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:07.930 [2024-10-13 04:08:01.076070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:07.930 [2024-10-13 04:08:01.076077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:07.930 [2024-10-13 04:08:01.076085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:07.930 [2024-10-13 04:08:01.076092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076184] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 
04:08:01.076364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:17:07.931 [2024-10-13 04:08:01.076544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:07.931 [2024-10-13 04:08:01.076790] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:07.931 [2024-10-13 04:08:01.076801] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 44f1066e-b8f1-4a97-9db9-72a27b163b91 00:17:07.931 [2024-10-13 04:08:01.076808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:07.931 [2024-10-13 04:08:01.076815] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:07.931 [2024-10-13 04:08:01.076822] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:07.931 [2024-10-13 04:08:01.076830] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:07.931 [2024-10-13 04:08:01.076836] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:07.931 [2024-10-13 04:08:01.076844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:07.932 [2024-10-13 04:08:01.076851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:07.932 [2024-10-13 04:08:01.076857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:07.932 [2024-10-13 04:08:01.076863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:07.932 [2024-10-13 04:08:01.076870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.932 [2024-10-13 04:08:01.076877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:07.932 [2024-10-13 04:08:01.076885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:17:07.932 [2024-10-13 04:08:01.076892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.190 [2024-10-13 04:08:01.089548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.190 [2024-10-13 04:08:01.089579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:08.190 [2024-10-13 04:08:01.089589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.637 ms 00:17:08.190 [2024-10-13 04:08:01.089596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.190 [2024-10-13 04:08:01.089964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.190 [2024-10-13 04:08:01.089975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:08.190 [2024-10-13 04:08:01.089984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:17:08.190 [2024-10-13 04:08:01.089995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.190 [2024-10-13 04:08:01.125157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.190 [2024-10-13 04:08:01.125189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.190 [2024-10-13 04:08:01.125198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.190 [2024-10-13 04:08:01.125205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.190 [2024-10-13 04:08:01.125288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.190 [2024-10-13 04:08:01.125297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.190 [2024-10-13 04:08:01.125306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:08.190 [2024-10-13 04:08:01.125316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.190 [2024-10-13 04:08:01.125353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.190 [2024-10-13 04:08:01.125361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.190 [2024-10-13 04:08:01.125369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.190 [2024-10-13 04:08:01.125376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.190 [2024-10-13 04:08:01.125392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.190 [2024-10-13 04:08:01.125399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.190 [2024-10-13 04:08:01.125406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.190 [2024-10-13 04:08:01.125413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.190 [2024-10-13 04:08:01.202629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.190 [2024-10-13 04:08:01.202667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.190 [2024-10-13 04:08:01.202677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.190 [2024-10-13 04:08:01.202684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.190 [2024-10-13 04:08:01.266427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.190 [2024-10-13 04:08:01.266468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:08.190 [2024-10-13 04:08:01.266477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.190 [2024-10-13 04:08:01.266489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.191 [2024-10-13 04:08:01.266549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.191 [2024-10-13 04:08:01.266558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:08.191 [2024-10-13 04:08:01.266566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.191 [2024-10-13 04:08:01.266573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.191 [2024-10-13 04:08:01.266600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.191 [2024-10-13 04:08:01.266608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:08.191 [2024-10-13 04:08:01.266639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.191 [2024-10-13 04:08:01.266647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.191 [2024-10-13 04:08:01.266736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.191 [2024-10-13 04:08:01.266746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:08.191 [2024-10-13 04:08:01.266754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.191 [2024-10-13 04:08:01.266761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.191 [2024-10-13 04:08:01.266790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.191 [2024-10-13 04:08:01.266799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:08.191 
[2024-10-13 04:08:01.266806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.191 [2024-10-13 04:08:01.266814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.191 [2024-10-13 04:08:01.266854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.191 [2024-10-13 04:08:01.266863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:08.191 [2024-10-13 04:08:01.266871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.191 [2024-10-13 04:08:01.266878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.191 [2024-10-13 04:08:01.266918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.191 [2024-10-13 04:08:01.266928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:08.191 [2024-10-13 04:08:01.266936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.191 [2024-10-13 04:08:01.266943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.191 [2024-10-13 04:08:01.267070] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.621 ms, result 0 00:17:09.124 00:17:09.124 00:17:09.124 04:08:02 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=74136 00:17:09.124 04:08:02 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:09.124 04:08:02 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 74136 00:17:09.124 04:08:02 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74136 ']' 00:17:09.124 04:08:02 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.124 04:08:02 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:09.124 04:08:02 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.124 04:08:02 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:09.124 04:08:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:09.124 [2024-10-13 04:08:02.260877] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:17:09.124 [2024-10-13 04:08:02.260998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74136 ] 00:17:09.384 [2024-10-13 04:08:02.407511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.384 [2024-10-13 04:08:02.505727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.950 04:08:03 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:09.950 04:08:03 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:09.950 04:08:03 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:10.208 [2024-10-13 04:08:03.309298] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.208 [2024-10-13 04:08:03.309359] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:10.467 [2024-10-13 04:08:03.483810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.467 [2024-10-13 04:08:03.483855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:10.467 [2024-10-13 04:08:03.483871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:10.467 [2024-10-13 04:08:03.483879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.467 [2024-10-13 04:08:03.486577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.467 [2024-10-13 04:08:03.486625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:10.467 [2024-10-13 04:08:03.486637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.678 ms 00:17:10.467 [2024-10-13 04:08:03.486645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.467 [2024-10-13 04:08:03.486771] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:10.467 [2024-10-13 04:08:03.487487] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:10.467 [2024-10-13 04:08:03.487518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.467 [2024-10-13 04:08:03.487526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:10.467 [2024-10-13 04:08:03.487536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:17:10.467 [2024-10-13 04:08:03.487543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.467 [2024-10-13 04:08:03.488680] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:10.467 [2024-10-13 04:08:03.501277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.467 [2024-10-13 04:08:03.501315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:10.467 [2024-10-13 04:08:03.501327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.602 ms 00:17:10.467 [2024-10-13 04:08:03.501337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.467 [2024-10-13 04:08:03.501415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.467 [2024-10-13 04:08:03.501427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:10.467 [2024-10-13 04:08:03.501435] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:10.467 [2024-10-13 04:08:03.501444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.467 [2024-10-13 04:08:03.506398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.467 [2024-10-13 04:08:03.506435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:10.467 [2024-10-13 04:08:03.506444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.909 ms 00:17:10.467 [2024-10-13 04:08:03.506453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.467 [2024-10-13 04:08:03.506546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.467 [2024-10-13 04:08:03.506558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:10.467 [2024-10-13 04:08:03.506566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:10.467 [2024-10-13 04:08:03.506574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.467 [2024-10-13 04:08:03.506597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.467 [2024-10-13 04:08:03.506611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:10.467 [2024-10-13 04:08:03.506638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:10.467 [2024-10-13 04:08:03.506646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.467 [2024-10-13 04:08:03.506669] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:10.467 [2024-10-13 04:08:03.509920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.467 [2024-10-13 04:08:03.509948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:10.467 [2024-10-13 04:08:03.509959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:17:10.467 [2024-10-13 04:08:03.509967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.467 [2024-10-13 04:08:03.510002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.467 [2024-10-13 04:08:03.510010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:10.467 [2024-10-13 04:08:03.510019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:10.467 [2024-10-13 04:08:03.510026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.467 [2024-10-13 04:08:03.510046] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:10.467 [2024-10-13 04:08:03.510065] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:10.467 [2024-10-13 04:08:03.510106] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:10.467 [2024-10-13 04:08:03.510121] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:10.467 [2024-10-13 04:08:03.510227] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:10.467 [2024-10-13 04:08:03.510237] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:10.467 [2024-10-13 04:08:03.510249] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:10.467 [2024-10-13 04:08:03.510259] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:10.467 [2024-10-13 04:08:03.510271] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:10.467 [2024-10-13 04:08:03.510279] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:10.467 [2024-10-13 04:08:03.510288] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:10.467 [2024-10-13 04:08:03.510295] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:10.467 [2024-10-13 04:08:03.510306] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:10.467 [2024-10-13 04:08:03.510313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.467 [2024-10-13 04:08:03.510322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:10.467 [2024-10-13 04:08:03.510329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:17:10.467 [2024-10-13 04:08:03.510337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.467 [2024-10-13 04:08:03.510424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.467 [2024-10-13 04:08:03.510433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:10.467 [2024-10-13 04:08:03.510441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:10.467 [2024-10-13 04:08:03.510450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.467 [2024-10-13 04:08:03.510559] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:10.467 [2024-10-13 04:08:03.510571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:10.467 [2024-10-13 04:08:03.510578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.467 [2024-10-13 04:08:03.510588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.467 [2024-10-13 04:08:03.510595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:10.467 [2024-10-13 04:08:03.510603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:10.467 [2024-10-13 04:08:03.510610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:10.467 [2024-10-13 04:08:03.510641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:10.467 [2024-10-13 04:08:03.510649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:10.467 [2024-10-13 04:08:03.510657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.467 [2024-10-13 04:08:03.510664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:10.467 [2024-10-13 04:08:03.510672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:10.467 [2024-10-13 04:08:03.510679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.467 [2024-10-13 04:08:03.510687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:10.467 [2024-10-13 04:08:03.510693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:10.467 [2024-10-13 04:08:03.510702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.467 
[2024-10-13 04:08:03.510709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:10.467 [2024-10-13 04:08:03.510718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:10.467 [2024-10-13 04:08:03.510725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.467 [2024-10-13 04:08:03.510733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:10.467 [2024-10-13 04:08:03.510746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:10.467 [2024-10-13 04:08:03.510754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.467 [2024-10-13 04:08:03.510760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:10.467 [2024-10-13 04:08:03.510770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:10.467 [2024-10-13 04:08:03.510776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.468 [2024-10-13 04:08:03.510784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:10.468 [2024-10-13 04:08:03.510791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:10.468 [2024-10-13 04:08:03.510799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.468 [2024-10-13 04:08:03.510805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:10.468 [2024-10-13 04:08:03.510813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:10.468 [2024-10-13 04:08:03.510819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.468 [2024-10-13 04:08:03.510829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:10.468 [2024-10-13 04:08:03.510835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:10.468 [2024-10-13 04:08:03.510843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.468 [2024-10-13 04:08:03.510849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:10.468 [2024-10-13 04:08:03.510857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:10.468 [2024-10-13 04:08:03.510863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.468 [2024-10-13 04:08:03.510871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:10.468 [2024-10-13 04:08:03.510878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:10.468 [2024-10-13 04:08:03.510888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.468 [2024-10-13 04:08:03.510894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:10.468 [2024-10-13 04:08:03.510910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:10.468 [2024-10-13 04:08:03.510918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.468 [2024-10-13 04:08:03.510926] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:10.468 [2024-10-13 04:08:03.510934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:10.468 [2024-10-13 04:08:03.510942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.468 [2024-10-13 04:08:03.510951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.468 [2024-10-13 04:08:03.510960] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:10.468 [2024-10-13 04:08:03.510966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:10.468 [2024-10-13 04:08:03.510975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:10.468 [2024-10-13 04:08:03.510982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:10.468 [2024-10-13 04:08:03.510990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:10.468 [2024-10-13 04:08:03.510997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:10.468 [2024-10-13 04:08:03.511007] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:10.468 [2024-10-13 04:08:03.511016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.468 [2024-10-13 04:08:03.511028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:10.468 [2024-10-13 04:08:03.511036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:10.468 [2024-10-13 04:08:03.511044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:10.468 [2024-10-13 04:08:03.511051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:10.468 [2024-10-13 04:08:03.511060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:10.468 [2024-10-13 04:08:03.511067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:10.468 [2024-10-13 04:08:03.511075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:10.468 [2024-10-13 04:08:03.511082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:10.468 [2024-10-13 04:08:03.511091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:10.468 [2024-10-13 04:08:03.511098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:10.468 [2024-10-13 04:08:03.511106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:10.468 [2024-10-13 04:08:03.511113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:10.468 [2024-10-13 04:08:03.511122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:10.468 [2024-10-13 04:08:03.511129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:10.468 [2024-10-13 04:08:03.511137] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:10.468 [2024-10-13 
04:08:03.511145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.468 [2024-10-13 04:08:03.511156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:10.468 [2024-10-13 04:08:03.511164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:10.468 [2024-10-13 04:08:03.511172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:10.468 [2024-10-13 04:08:03.511179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:10.468 [2024-10-13 04:08:03.511189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.468 [2024-10-13 04:08:03.511196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:10.468 [2024-10-13 04:08:03.511205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:17:10.468 [2024-10-13 04:08:03.511212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.468 [2024-10-13 04:08:03.537093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.468 [2024-10-13 04:08:03.537128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:10.468 [2024-10-13 04:08:03.537140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.824 ms 00:17:10.468 [2024-10-13 04:08:03.537148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.468 [2024-10-13 04:08:03.537265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.468 [2024-10-13 04:08:03.537277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:10.468 [2024-10-13 04:08:03.537286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:10.468 [2024-10-13 04:08:03.537293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.468 [2024-10-13 04:08:03.567630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.468 [2024-10-13 04:08:03.567660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:10.468 [2024-10-13 04:08:03.567672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.302 ms 00:17:10.468 [2024-10-13 04:08:03.567682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.468 [2024-10-13 04:08:03.567755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.468 [2024-10-13 04:08:03.567769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:10.468 [2024-10-13 04:08:03.567784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:10.468 [2024-10-13 04:08:03.567794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.468 [2024-10-13 04:08:03.568161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.468 [2024-10-13 04:08:03.568187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:10.468 [2024-10-13 04:08:03.568202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:17:10.468 [2024-10-13 04:08:03.568214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:10.468 [2024-10-13 04:08:03.568390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.468 [2024-10-13 04:08:03.568410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:10.468 [2024-10-13 04:08:03.568424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:10.468 [2024-10-13 04:08:03.568436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.468 [2024-10-13 04:08:03.584263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.468 [2024-10-13 04:08:03.584300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:10.468 [2024-10-13 04:08:03.584314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.794 ms 00:17:10.468 [2024-10-13 04:08:03.584322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.468 [2024-10-13 04:08:03.597055] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:10.468 [2024-10-13 04:08:03.597088] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:10.468 [2024-10-13 04:08:03.597103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.468 [2024-10-13 04:08:03.597112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:10.468 [2024-10-13 04:08:03.597123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.654 ms 00:17:10.468 [2024-10-13 04:08:03.597131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.468 [2024-10-13 04:08:03.621188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.468 [2024-10-13 04:08:03.621221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:10.468 [2024-10-13 04:08:03.621233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.984 ms 00:17:10.468 [2024-10-13 04:08:03.621241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.727 [2024-10-13 04:08:03.632408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.727 [2024-10-13 04:08:03.632438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:10.727 [2024-10-13 04:08:03.632451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.110 ms 00:17:10.727 [2024-10-13 04:08:03.632458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.727 [2024-10-13 04:08:03.644160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.727 [2024-10-13 04:08:03.644187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:10.727 [2024-10-13 04:08:03.644199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.637 ms 00:17:10.727 [2024-10-13 04:08:03.644206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.727 [2024-10-13 04:08:03.644843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.727 [2024-10-13 04:08:03.644862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:10.727 [2024-10-13 04:08:03.644872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:17:10.727 [2024-10-13 04:08:03.644880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.727 [2024-10-13 
04:08:03.712155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.727 [2024-10-13 04:08:03.712205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:10.727 [2024-10-13 04:08:03.712221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.248 ms 00:17:10.727 [2024-10-13 04:08:03.712229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.727 [2024-10-13 04:08:03.723160] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:10.727 [2024-10-13 04:08:03.737019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.727 [2024-10-13 04:08:03.737063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:10.727 [2024-10-13 04:08:03.737077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.699 ms 00:17:10.727 [2024-10-13 04:08:03.737087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.727 [2024-10-13 04:08:03.737172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.727 [2024-10-13 04:08:03.737185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:10.727 [2024-10-13 04:08:03.737194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:10.727 [2024-10-13 04:08:03.737203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.727 [2024-10-13 04:08:03.737258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.727 [2024-10-13 04:08:03.737268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:10.727 [2024-10-13 04:08:03.737276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:10.727 [2024-10-13 04:08:03.737285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.727 [2024-10-13 04:08:03.737308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.727 [2024-10-13 04:08:03.737320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:10.727 [2024-10-13 04:08:03.737328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:10.727 [2024-10-13 04:08:03.737339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.727 [2024-10-13 04:08:03.737369] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:10.727 [2024-10-13 04:08:03.737381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.727 [2024-10-13 04:08:03.737389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:10.727 [2024-10-13 04:08:03.737399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:10.727 [2024-10-13 04:08:03.737409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.727 [2024-10-13 04:08:03.760567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.727 [2024-10-13 04:08:03.760705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:10.727 [2024-10-13 04:08:03.760727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.134 ms 00:17:10.727 [2024-10-13 04:08:03.760735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.727 [2024-10-13 04:08:03.760826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.727 [2024-10-13 04:08:03.760837] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:10.727 [2024-10-13 04:08:03.760847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:10.727 [2024-10-13 04:08:03.760855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.727 [2024-10-13 04:08:03.761708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:10.727 [2024-10-13 04:08:03.764833] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 277.607 ms, result 0 00:17:10.728 [2024-10-13 04:08:03.765841] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:10.728 Some configs were skipped because the RPC state that can call them passed over. 00:17:10.728 04:08:03 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:10.986 [2024-10-13 04:08:03.996640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.986 [2024-10-13 04:08:03.996811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:10.986 [2024-10-13 04:08:03.997238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.891 ms 00:17:10.986 [2024-10-13 04:08:03.997289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.986 [2024-10-13 04:08:03.997360] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.608 ms, result 0 00:17:10.986 true 00:17:10.986 04:08:04 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:11.244 [2024-10-13 04:08:04.201721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.244 [2024-10-13 04:08:04.201877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:11.244 [2024-10-13 04:08:04.201932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.704 ms 00:17:11.244 [2024-10-13 04:08:04.201955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.244 [2024-10-13 04:08:04.202009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.994 ms, result 0 00:17:11.244 true 00:17:11.244 04:08:04 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 74136 00:17:11.244 04:08:04 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74136 ']' 00:17:11.244 04:08:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74136 00:17:11.244 04:08:04 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:11.244 04:08:04 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:11.244 04:08:04 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74136 00:17:11.244 04:08:04 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:11.244 killing process with pid 74136 00:17:11.244 04:08:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:11.244 04:08:04 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74136' 00:17:11.244 04:08:04 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74136 00:17:11.244 04:08:04 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74136 00:17:11.810 [2024-10-13 04:08:04.938944] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.810 [2024-10-13 04:08:04.938994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:11.810 [2024-10-13 04:08:04.939008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:11.810 [2024-10-13 04:08:04.939017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.810 [2024-10-13 04:08:04.939040] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:11.810 [2024-10-13 04:08:04.941663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.810 [2024-10-13 04:08:04.941692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:11.810 [2024-10-13 04:08:04.941708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.606 ms 00:17:11.810 [2024-10-13 04:08:04.941716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.810 [2024-10-13 04:08:04.942015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.810 [2024-10-13 04:08:04.942025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:11.810 [2024-10-13 04:08:04.942035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:17:11.810 [2024-10-13 04:08:04.942043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.810 [2024-10-13 04:08:04.946322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.810 [2024-10-13 04:08:04.946350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:11.810 [2024-10-13 04:08:04.946361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.258 ms 00:17:11.810 [2024-10-13 04:08:04.946369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.810 [2024-10-13 04:08:04.953264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.810 [2024-10-13 04:08:04.953390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:11.810 [2024-10-13 04:08:04.953412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.856 ms 00:17:11.810 [2024-10-13 04:08:04.953420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.810 [2024-10-13 04:08:04.963685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.810 [2024-10-13 04:08:04.963795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:11.810 [2024-10-13 04:08:04.963815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.211 ms 00:17:11.810 [2024-10-13 04:08:04.963828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.070 [2024-10-13 04:08:04.971290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.070 [2024-10-13 04:08:04.971398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:12.070 [2024-10-13 04:08:04.971417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.425 ms 00:17:12.070 [2024-10-13 04:08:04.971427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.070 [2024-10-13 04:08:04.971553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.070 [2024-10-13 04:08:04.971562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:12.070 [2024-10-13 04:08:04.971572] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:12.070 [2024-10-13 04:08:04.971579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.070 [2024-10-13 04:08:04.981936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.070 [2024-10-13 04:08:04.982041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:12.070 [2024-10-13 04:08:04.982058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.336 ms 00:17:12.070 [2024-10-13 04:08:04.982065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.070 [2024-10-13 04:08:04.992363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.070 [2024-10-13 04:08:04.992392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:12.070 [2024-10-13 04:08:04.992406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.263 ms 00:17:12.070 [2024-10-13 04:08:04.992413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.070 [2024-10-13 04:08:05.001824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.070 [2024-10-13 04:08:05.001862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:12.070 [2024-10-13 04:08:05.001873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.372 ms 00:17:12.070 [2024-10-13 04:08:05.001880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.070 [2024-10-13 04:08:05.011080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.070 [2024-10-13 04:08:05.011109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:12.070 [2024-10-13 04:08:05.011120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.138 ms 00:17:12.070 [2024-10-13 04:08:05.011126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.070 [2024-10-13 04:08:05.011172] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:12.070 [2024-10-13 04:08:05.011185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011272] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 
[2024-10-13 04:08:05.011481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:12.070 [2024-10-13 04:08:05.011703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:12.070 [2024-10-13 04:08:05.011797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.011999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.012023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.012031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.012040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:12.071 [2024-10-13 04:08:05.012057] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:12.071 [2024-10-13 04:08:05.012068] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 44f1066e-b8f1-4a97-9db9-72a27b163b91 00:17:12.071 [2024-10-13 04:08:05.012080] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:12.071 [2024-10-13 04:08:05.012091] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:12.071 [2024-10-13 04:08:05.012100] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:12.071 [2024-10-13 04:08:05.012110] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:12.071 [2024-10-13 04:08:05.012117] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:12.071 [2024-10-13 04:08:05.012126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:12.071 [2024-10-13 04:08:05.012133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:12.071 [2024-10-13 04:08:05.012141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:12.071 [2024-10-13 04:08:05.012147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:12.071 [2024-10-13 04:08:05.012156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:12.071 [2024-10-13 04:08:05.012163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:12.071 [2024-10-13 04:08:05.012173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:17:12.071 [2024-10-13 04:08:05.012179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.025209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.071 [2024-10-13 04:08:05.025333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:12.071 [2024-10-13 04:08:05.025395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.007 ms 00:17:12.071 [2024-10-13 04:08:05.025418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.025822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.071 [2024-10-13 04:08:05.025864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:12.071 [2024-10-13 04:08:05.025925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:17:12.071 [2024-10-13 04:08:05.025947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.069839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.071 [2024-10-13 04:08:05.069954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:12.071 [2024-10-13 04:08:05.070007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.071 [2024-10-13 04:08:05.070030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.071238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.071 [2024-10-13 04:08:05.071331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:12.071 [2024-10-13 04:08:05.071380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.071 [2024-10-13 04:08:05.071401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.071463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.071 [2024-10-13 04:08:05.071486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:12.071 [2024-10-13 04:08:05.071510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.071 [2024-10-13 04:08:05.071528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.071558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.071 [2024-10-13 04:08:05.071578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:12.071 [2024-10-13 04:08:05.071661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.071 [2024-10-13 04:08:05.071685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.149104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.071 [2024-10-13 04:08:05.149239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:12.071 [2024-10-13 04:08:05.149289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.071 [2024-10-13 04:08:05.149311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 
04:08:05.212817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.071 [2024-10-13 04:08:05.212948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:12.071 [2024-10-13 04:08:05.212998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.071 [2024-10-13 04:08:05.213020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.213109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.071 [2024-10-13 04:08:05.213136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.071 [2024-10-13 04:08:05.213160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.071 [2024-10-13 04:08:05.213179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.213220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.071 [2024-10-13 04:08:05.213241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.071 [2024-10-13 04:08:05.213263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.071 [2024-10-13 04:08:05.213326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.213437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.071 [2024-10-13 04:08:05.213461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.071 [2024-10-13 04:08:05.213485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.071 [2024-10-13 04:08:05.213504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.213550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.071 [2024-10-13 04:08:05.213573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:12.071 [2024-10-13 04:08:05.213595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.071 [2024-10-13 04:08:05.213632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.213684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.071 [2024-10-13 04:08:05.213753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.071 [2024-10-13 04:08:05.213772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.071 [2024-10-13 04:08:05.213780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.071 [2024-10-13 04:08:05.213829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.071 [2024-10-13 04:08:05.213838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:12.072 [2024-10-13 04:08:05.213848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.072 [2024-10-13 04:08:05.213855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.072 [2024-10-13 04:08:05.213984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.018 ms, result 0 00:17:12.639 04:08:05 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:12.639 04:08:05 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:12.938 [2024-10-13 04:08:05.844364] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:17:12.938 [2024-10-13 04:08:05.844484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74189 ] 00:17:12.938 [2024-10-13 04:08:05.993068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.938 [2024-10-13 04:08:06.070988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.197 [2024-10-13 04:08:06.277008] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:13.197 [2024-10-13 04:08:06.277060] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:13.457 [2024-10-13 04:08:06.426973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.457 [2024-10-13 04:08:06.427022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:13.457 [2024-10-13 04:08:06.427035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:13.457 [2024-10-13 04:08:06.427043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.457 [2024-10-13 04:08:06.429677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.457 [2024-10-13 04:08:06.429824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:13.457 [2024-10-13 04:08:06.429841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.610 ms 00:17:13.457 [2024-10-13 04:08:06.429849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.457 [2024-10-13 04:08:06.430167] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:13.457 [2024-10-13 04:08:06.430907] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:13.457 [2024-10-13 04:08:06.430932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.457 [2024-10-13 04:08:06.430942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:13.457 [2024-10-13 04:08:06.430951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:17:13.457 [2024-10-13 04:08:06.430958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.457 [2024-10-13 04:08:06.432162] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:13.457 [2024-10-13 04:08:06.444244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.457 [2024-10-13 04:08:06.444276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:13.457 [2024-10-13 04:08:06.444288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.083 ms 00:17:13.457 [2024-10-13 04:08:06.444299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.457 [2024-10-13 04:08:06.444381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.457 [2024-10-13 04:08:06.444393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:13.457 [2024-10-13 04:08:06.444401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.017 ms 00:17:13.457 [2024-10-13 04:08:06.444408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.457 [2024-10-13 04:08:06.449141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.457 [2024-10-13 04:08:06.449288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:13.457 [2024-10-13 04:08:06.449302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.694 ms 00:17:13.457 [2024-10-13 04:08:06.449314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.457 [2024-10-13 04:08:06.449402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.457 [2024-10-13 04:08:06.449417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:13.457 [2024-10-13 04:08:06.449426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:13.457 [2024-10-13 04:08:06.449433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.457 [2024-10-13 04:08:06.449456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.457 [2024-10-13 04:08:06.449464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:13.457 [2024-10-13 04:08:06.449472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:13.457 [2024-10-13 04:08:06.449481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.457 [2024-10-13 04:08:06.449499] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:13.457 [2024-10-13 04:08:06.452761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.457 [2024-10-13 04:08:06.452788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:13.457 [2024-10-13 04:08:06.452797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:17:13.457 [2024-10-13 04:08:06.452804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.457 [2024-10-13 04:08:06.452838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.457 [2024-10-13 04:08:06.452846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:13.457 [2024-10-13 04:08:06.452854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:13.457 [2024-10-13 04:08:06.452861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.457 [2024-10-13 04:08:06.452878] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:13.457 [2024-10-13 04:08:06.452896] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:13.457 [2024-10-13 04:08:06.452931] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:13.457 [2024-10-13 04:08:06.452946] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:13.457 [2024-10-13 04:08:06.453047] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:13.457 [2024-10-13 04:08:06.453061] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:13.457 [2024-10-13 04:08:06.453071] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:13.457 [2024-10-13 04:08:06.453080] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:13.457 [2024-10-13 04:08:06.453089] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:13.457 [2024-10-13 04:08:06.453097] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:13.457 [2024-10-13 04:08:06.453106] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:13.457 [2024-10-13 04:08:06.453113] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:13.457 [2024-10-13 04:08:06.453120] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:13.457 [2024-10-13 04:08:06.453128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.457 [2024-10-13 04:08:06.453139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:13.457 [2024-10-13 04:08:06.453147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:17:13.457 [2024-10-13 04:08:06.453154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.457 [2024-10-13 04:08:06.453242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.457 [2024-10-13 04:08:06.453250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:13.457 [2024-10-13 04:08:06.453257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:13.457 [2024-10-13 04:08:06.453267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.457 [2024-10-13 04:08:06.453364] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:13.457 [2024-10-13 04:08:06.453373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:13.458 [2024-10-13 04:08:06.453381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:13.458 [2024-10-13 04:08:06.453388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:13.458 [2024-10-13 04:08:06.453402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:13.458 [2024-10-13 04:08:06.453416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:13.458 [2024-10-13 04:08:06.453422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:13.458 [2024-10-13 04:08:06.453435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:13.458 [2024-10-13 04:08:06.453442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:13.458 [2024-10-13 04:08:06.453448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:13.458 [2024-10-13 04:08:06.453460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:13.458 [2024-10-13 04:08:06.453467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:13.458 [2024-10-13 04:08:06.453473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453480] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:13.458 [2024-10-13 04:08:06.453486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:13.458 [2024-10-13 04:08:06.453492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:13.458 [2024-10-13 04:08:06.453505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.458 [2024-10-13 04:08:06.453517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:13.458 [2024-10-13 04:08:06.453523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.458 [2024-10-13 04:08:06.453536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:13.458 [2024-10-13 04:08:06.453542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.458 [2024-10-13 04:08:06.453555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:13.458 [2024-10-13 04:08:06.453561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:13.458 [2024-10-13 04:08:06.453575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:13.458 [2024-10-13 04:08:06.453581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:13.458 [2024-10-13 04:08:06.453593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:13.458 [2024-10-13 04:08:06.453600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:13.458 [2024-10-13 04:08:06.453606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:13.458 [2024-10-13 04:08:06.453630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:13.458 [2024-10-13 04:08:06.453637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:13.458 [2024-10-13 04:08:06.453644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:13.458 [2024-10-13 04:08:06.453657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:13.458 [2024-10-13 04:08:06.453663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453669] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:13.458 [2024-10-13 04:08:06.453677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:13.458 [2024-10-13 04:08:06.453684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:13.458 [2024-10-13 04:08:06.453691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:13.458 [2024-10-13 04:08:06.453699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:13.458 
[2024-10-13 04:08:06.453706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:13.458 [2024-10-13 04:08:06.453713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:13.458 [2024-10-13 04:08:06.453720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:13.458 [2024-10-13 04:08:06.453726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:13.458 [2024-10-13 04:08:06.453733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:13.458 [2024-10-13 04:08:06.453741] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:13.458 [2024-10-13 04:08:06.453752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:13.458 [2024-10-13 04:08:06.453760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:13.458 [2024-10-13 04:08:06.453767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:13.458 [2024-10-13 04:08:06.453774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:13.458 [2024-10-13 04:08:06.453781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:13.458 [2024-10-13 04:08:06.453788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:13.458 [2024-10-13 04:08:06.453795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:13.458 [2024-10-13 04:08:06.453802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:13.458 [2024-10-13 04:08:06.453809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:13.458 [2024-10-13 04:08:06.453816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:13.458 [2024-10-13 04:08:06.453823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:13.458 [2024-10-13 04:08:06.453830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:13.458 [2024-10-13 04:08:06.453837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:13.458 [2024-10-13 04:08:06.453843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:13.458 [2024-10-13 04:08:06.453850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:13.458 [2024-10-13 04:08:06.453857] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:13.458 [2024-10-13 04:08:06.453865] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:13.458 [2024-10-13 04:08:06.453873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:13.458 [2024-10-13 04:08:06.453880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:13.458 [2024-10-13 04:08:06.453887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:13.458 [2024-10-13 04:08:06.453893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:13.458 [2024-10-13 04:08:06.453901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.458 [2024-10-13 04:08:06.453908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:13.458 [2024-10-13 04:08:06.453915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:17:13.458 [2024-10-13 04:08:06.453925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.458 [2024-10-13 04:08:06.479411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.458 [2024-10-13 04:08:06.479444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:13.458 [2024-10-13 04:08:06.479454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.423 ms 00:17:13.458 [2024-10-13 04:08:06.479461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.458 [2024-10-13 04:08:06.479568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.458 [2024-10-13 04:08:06.479578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:13.458 [2024-10-13 04:08:06.479586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:13.458 [2024-10-13 04:08:06.479597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.458 [2024-10-13 04:08:06.522270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.458 [2024-10-13 04:08:06.522307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:13.458 [2024-10-13 04:08:06.522319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.635 ms 00:17:13.458 [2024-10-13 04:08:06.522327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.458 [2024-10-13 04:08:06.522418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.458 [2024-10-13 04:08:06.522429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:13.458 [2024-10-13 04:08:06.522438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:13.458 [2024-10-13 04:08:06.522446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.458 [2024-10-13 04:08:06.522777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.458 [2024-10-13 04:08:06.522793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:13.458 [2024-10-13 04:08:06.522802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:17:13.458 [2024-10-13 04:08:06.522810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.458 [2024-10-13 
04:08:06.522936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.458 [2024-10-13 04:08:06.522947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:13.458 [2024-10-13 04:08:06.522955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:13.458 [2024-10-13 04:08:06.522962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.459 [2024-10-13 04:08:06.536172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.459 [2024-10-13 04:08:06.536202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:13.459 [2024-10-13 04:08:06.536211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.190 ms 00:17:13.459 [2024-10-13 04:08:06.536218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.459 [2024-10-13 04:08:06.548393] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:13.459 [2024-10-13 04:08:06.548524] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:13.459 [2024-10-13 04:08:06.548538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.459 [2024-10-13 04:08:06.548546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:13.459 [2024-10-13 04:08:06.548555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.226 ms 00:17:13.459 [2024-10-13 04:08:06.548562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.459 [2024-10-13 04:08:06.572677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.459 [2024-10-13 04:08:06.572735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:13.459 [2024-10-13 04:08:06.572749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.032 ms 00:17:13.459 [2024-10-13 04:08:06.572757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.459 [2024-10-13 04:08:06.584234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.459 [2024-10-13 04:08:06.584266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:13.459 [2024-10-13 04:08:06.584275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.399 ms 00:17:13.459 [2024-10-13 04:08:06.584282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.459 [2024-10-13 04:08:06.595481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.459 [2024-10-13 04:08:06.595604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:13.459 [2024-10-13 04:08:06.595629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.136 ms 00:17:13.459 [2024-10-13 04:08:06.595636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.459 [2024-10-13 04:08:06.596253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.459 [2024-10-13 04:08:06.596275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:13.459 [2024-10-13 04:08:06.596284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:17:13.459 [2024-10-13 04:08:06.596291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.718 [2024-10-13 04:08:06.650366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:13.718 [2024-10-13 04:08:06.650508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:13.718 [2024-10-13 04:08:06.650526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.052 ms 00:17:13.718 [2024-10-13 04:08:06.650535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.718 [2024-10-13 04:08:06.660791] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:13.718 [2024-10-13 04:08:06.674425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.718 [2024-10-13 04:08:06.674460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:13.718 [2024-10-13 04:08:06.674471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.788 ms 00:17:13.718 [2024-10-13 04:08:06.674479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.718 [2024-10-13 04:08:06.674554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.718 [2024-10-13 04:08:06.674567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:13.718 [2024-10-13 04:08:06.674576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:13.718 [2024-10-13 04:08:06.674583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.718 [2024-10-13 04:08:06.674650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.718 [2024-10-13 04:08:06.674660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:13.718 [2024-10-13 04:08:06.674668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:13.718 [2024-10-13 04:08:06.674676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-10-13 04:08:06.674700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-10-13 04:08:06.674712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:13.719 [2024-10-13 04:08:06.674721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:13.719 [2024-10-13 04:08:06.674729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-10-13 04:08:06.674756] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:13.719 [2024-10-13 04:08:06.674765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-10-13 04:08:06.674772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:13.719 [2024-10-13 04:08:06.674779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:13.719 [2024-10-13 04:08:06.674787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-10-13 04:08:06.698464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-10-13 04:08:06.698504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:13.719 [2024-10-13 04:08:06.698515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.655 ms 00:17:13.719 [2024-10-13 04:08:06.698523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-10-13 04:08:06.698630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-10-13 04:08:06.698641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:13.719 [2024-10-13 04:08:06.698650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:13.719 [2024-10-13 04:08:06.698657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-10-13 04:08:06.699823] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:13.719 [2024-10-13 04:08:06.702827] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.552 ms, result 0 00:17:13.719 [2024-10-13 04:08:06.703369] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:13.719 [2024-10-13 04:08:06.716403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:14.653  [2024-10-13T04:08:08.748Z] Copying: 46/256 [MB] (46 MBps) [2024-10-13T04:08:10.124Z] Copying: 82/256 [MB] (36 MBps) [2024-10-13T04:08:11.056Z] Copying: 111/256 [MB] (28 MBps) [2024-10-13T04:08:11.989Z] Copying: 126/256 [MB] (14 MBps) [2024-10-13T04:08:12.924Z] Copying: 157/256 [MB] (30 MBps) [2024-10-13T04:08:13.858Z] Copying: 180/256 [MB] (23 MBps) [2024-10-13T04:08:14.793Z] Copying: 205/256 [MB] (24 MBps) [2024-10-13T04:08:15.776Z] Copying: 231/256 [MB] (26 MBps) [2024-10-13T04:08:16.035Z] Copying: 252/256 [MB] (20 MBps) [2024-10-13T04:08:16.035Z] Copying: 256/256 [MB] (average 28 MBps)[2024-10-13 04:08:15.849003] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:22.875 [2024-10-13 04:08:15.858267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.875 [2024-10-13 04:08:15.858379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:22.875 [2024-10-13 04:08:15.858437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:22.875 [2024-10-13 04:08:15.858461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.875 [2024-10-13 04:08:15.858497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:22.875 [2024-10-13 04:08:15.861104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.875 [2024-10-13 04:08:15.861207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:22.875 [2024-10-13 04:08:15.861258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.570 ms 00:17:22.875 [2024-10-13 04:08:15.861280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.875 [2024-10-13 04:08:15.861541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.875 [2024-10-13 04:08:15.861633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:22.875 [2024-10-13 04:08:15.861655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:17:22.875 [2024-10-13 04:08:15.861770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.875 [2024-10-13 04:08:15.865478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.875 [2024-10-13 04:08:15.865552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:22.875 [2024-10-13 04:08:15.865600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:17:22.875 [2024-10-13 04:08:15.866156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:17:22.875 [2024-10-13 04:08:15.873089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.875 [2024-10-13 04:08:15.873180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:22.875 [2024-10-13 04:08:15.873226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.895 ms 00:17:22.875 [2024-10-13 04:08:15.873248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.875 [2024-10-13 04:08:15.896972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.875 [2024-10-13 04:08:15.897096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:22.875 [2024-10-13 04:08:15.897145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.659 ms 00:17:22.875 [2024-10-13 04:08:15.897166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.875 [2024-10-13 04:08:15.911121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.875 [2024-10-13 04:08:15.911225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:22.875 [2024-10-13 04:08:15.911273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.915 ms 00:17:22.875 [2024-10-13 04:08:15.911301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.875 [2024-10-13 04:08:15.911440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.875 [2024-10-13 04:08:15.911469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:22.875 [2024-10-13 04:08:15.911525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:22.875 [2024-10-13 04:08:15.911548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.875 [2024-10-13 04:08:15.935130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.875 [2024-10-13 04:08:15.935234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:22.875 [2024-10-13 04:08:15.935280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.546 ms 00:17:22.875 [2024-10-13 04:08:15.935300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.875 [2024-10-13 04:08:15.958914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.875 [2024-10-13 04:08:15.959021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:22.875 [2024-10-13 04:08:15.959068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.574 ms 00:17:22.875 [2024-10-13 04:08:15.959088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.875 [2024-10-13 04:08:15.981800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.875 [2024-10-13 04:08:15.981906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:22.875 [2024-10-13 04:08:15.981952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.607 ms 00:17:22.875 [2024-10-13 04:08:15.981974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.875 [2024-10-13 04:08:16.004333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.875 [2024-10-13 04:08:16.004439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:22.875 [2024-10-13 04:08:16.004485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.232 ms 00:17:22.875 [2024-10-13 
04:08:16.004505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.875 [2024-10-13 04:08:16.004609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:22.875 [2024-10-13 04:08:16.004649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:22.875 [2024-10-13 04:08:16.004685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:22.875 [2024-10-13 04:08:16.004764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:22.875 [2024-10-13 04:08:16.004798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:22.875 [2024-10-13 04:08:16.004827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:22.875 [2024-10-13 04:08:16.004855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:22.875 [2024-10-13 04:08:16.004965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:22.875 [2024-10-13 04:08:16.004998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:22.875 [2024-10-13 04:08:16.005026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:22.875 [2024-10-13 04:08:16.005055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:22.875 [2024-10-13 04:08:16.005083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:22.875 [2024-10-13 04:08:16.005152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005536] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.005976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 
04:08:16.006677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.006988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:17:22.876 [2024-10-13 04:08:16.007393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:22.876 [2024-10-13 04:08:16.007606] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:22.876 [2024-10-13 04:08:16.007624] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 44f1066e-b8f1-4a97-9db9-72a27b163b91 00:17:22.876 [2024-10-13 04:08:16.007632] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:22.876 [2024-10-13 04:08:16.007640] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:22.876 [2024-10-13 04:08:16.007647] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:22.876 [2024-10-13 04:08:16.007654] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:22.876 [2024-10-13 04:08:16.007661] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:22.876 [2024-10-13 04:08:16.007669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:22.877 [2024-10-13 04:08:16.007676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:22.877 [2024-10-13 04:08:16.007682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:22.877 [2024-10-13 04:08:16.007689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:22.877 [2024-10-13 04:08:16.007696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.877 [2024-10-13 04:08:16.007703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:22.877 [2024-10-13 04:08:16.007711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.088 ms 00:17:22.877 [2024-10-13 04:08:16.007721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.877 [2024-10-13 04:08:16.019938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.877 [2024-10-13 04:08:16.019965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:22.877 [2024-10-13 04:08:16.019975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.187 ms 00:17:22.877 [2024-10-13 04:08:16.019982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.877 [2024-10-13 04:08:16.020342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.877 [2024-10-13 04:08:16.020369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:22.877 [2024-10-13 04:08:16.020383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:17:22.877 [2024-10-13 04:08:16.020390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.135 [2024-10-13 04:08:16.055219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.135 [2024-10-13 04:08:16.055255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:23.135 [2024-10-13 04:08:16.055264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.135 [2024-10-13 04:08:16.055272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.135 [2024-10-13 04:08:16.055342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.135 [2024-10-13 04:08:16.055352] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:23.135 [2024-10-13 04:08:16.055362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.135 [2024-10-13 04:08:16.055369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.135 [2024-10-13 04:08:16.055408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.135 [2024-10-13 04:08:16.055416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:23.135 [2024-10-13 04:08:16.055424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.135 [2024-10-13 04:08:16.055431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.135 [2024-10-13 04:08:16.055447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.135 [2024-10-13 04:08:16.055454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:23.135 [2024-10-13 04:08:16.055462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.135 [2024-10-13 04:08:16.055471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.135 [2024-10-13 04:08:16.132896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.135 [2024-10-13 04:08:16.133065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:23.135 [2024-10-13 04:08:16.133081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.135 [2024-10-13 04:08:16.133089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.135 [2024-10-13 04:08:16.195461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.135 [2024-10-13 04:08:16.195500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:23.135 [2024-10-13 04:08:16.195514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.135 [2024-10-13 04:08:16.195522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.135 [2024-10-13 04:08:16.195564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.135 [2024-10-13 04:08:16.195572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.135 [2024-10-13 04:08:16.195581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.135 [2024-10-13 04:08:16.195588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.135 [2024-10-13 04:08:16.195634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.135 [2024-10-13 04:08:16.195643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.135 [2024-10-13 04:08:16.195651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.135 [2024-10-13 04:08:16.195659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.136 [2024-10-13 04:08:16.195760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.136 [2024-10-13 04:08:16.195769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.136 [2024-10-13 04:08:16.195778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.136 [2024-10-13 04:08:16.195784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.136 [2024-10-13 04:08:16.195813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:23.136 [2024-10-13 04:08:16.195822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:23.136 [2024-10-13 04:08:16.195830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.136 [2024-10-13 04:08:16.195837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.136 [2024-10-13 04:08:16.195875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.136 [2024-10-13 04:08:16.195883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.136 [2024-10-13 04:08:16.195891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.136 [2024-10-13 04:08:16.195899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.136 [2024-10-13 04:08:16.195940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:23.136 [2024-10-13 04:08:16.195949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.136 [2024-10-13 04:08:16.195956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:23.136 [2024-10-13 04:08:16.195964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.136 [2024-10-13 04:08:16.196100] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.836 ms, result 0 00:17:23.702 00:17:23.702 00:17:23.961 04:08:16 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:23.961 04:08:16 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:24.528 04:08:17 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:24.528 [2024-10-13 04:08:17.484094] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
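Both management sequences in this run ('FTL startup' earlier and the 'FTL shutdown' just before the trim.sh commands above) are reported as repeated trace_step NOTICE entries: an Action marker followed by name, duration and status. A minimal post-processing sketch, assuming only the console format visible in this log (an illustrative helper, not part of SPDK or of the ftl_trim test), that pairs each step name with the duration that follows it and prints the slowest steps first:

#!/usr/bin/env python3
# parse_trace_steps.py - illustrative helper (assumed name, not part of SPDK):
# summarize the per-step durations printed as FTL trace_step NOTICE lines.
import re
import sys

# Matches either "name: <step>" or "duration: <value> ms" inside a trace_step
# entry, stopping the step name at the next console timestamp (HH:MM:SS.mmm).
EVENT_RE = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] "
    r"(?:name: (?P<name>.+?)|duration: (?P<dur>[\d.]+) ms)"
    r"(?=\s+\d{2}:\d{2}:\d{2}\.\d{3}|\s+\[|\s*$)"
)

def summarize(path):
    # Console lines can wrap mid-entry, so join the whole log into one string.
    text = open(path, errors="replace").read().replace("\n", " ")
    steps, pending = [], None
    for m in EVENT_RE.finditer(text):
        if m.group("name") is not None:
            pending = m.group("name").strip()
        elif pending is not None:
            # Pair this duration with the most recently seen step name.
            steps.append((pending, float(m.group("dur"))))
            pending = None
    for name, dur in sorted(steps, key=lambda s: s[1], reverse=True):
        print(f"{dur:10.3f} ms  {name}")

if __name__ == "__main__":
    summarize(sys.argv[1])

Run against a saved console log, this would list the longest steps of this run first, e.g. 'Restore P2L checkpoints' (54.052 ms) and 'Initialize NV cache' (42.635 ms) from the startup sequence above.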
00:17:24.528 [2024-10-13 04:08:17.484215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74315 ] 00:17:24.528 [2024-10-13 04:08:17.635887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.787 [2024-10-13 04:08:17.732819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.046 [2024-10-13 04:08:17.985477] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:25.046 [2024-10-13 04:08:17.985537] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:25.046 [2024-10-13 04:08:18.141983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.046 [2024-10-13 04:08:18.142143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:25.046 [2024-10-13 04:08:18.142162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:25.046 [2024-10-13 04:08:18.142171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.046 [2024-10-13 04:08:18.145277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.046 [2024-10-13 04:08:18.145419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.046 [2024-10-13 04:08:18.145437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms 00:17:25.046 [2024-10-13 04:08:18.145447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.046 [2024-10-13 04:08:18.145828] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:25.046 [2024-10-13 04:08:18.146567] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:25.046 [2024-10-13 04:08:18.146604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.046 [2024-10-13 04:08:18.146627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.047 [2024-10-13 04:08:18.146638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:17:25.047 [2024-10-13 04:08:18.146645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.047 [2024-10-13 04:08:18.147762] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:25.047 [2024-10-13 04:08:18.160548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.047 [2024-10-13 04:08:18.160704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:25.047 [2024-10-13 04:08:18.160722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.787 ms 00:17:25.047 [2024-10-13 04:08:18.160734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.047 [2024-10-13 04:08:18.161112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.047 [2024-10-13 04:08:18.161149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:25.047 [2024-10-13 04:08:18.161160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:25.047 [2024-10-13 04:08:18.161168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.047 [2024-10-13 04:08:18.166079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:25.047 [2024-10-13 04:08:18.166115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.047 [2024-10-13 04:08:18.166125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.868 ms 00:17:25.047 [2024-10-13 04:08:18.166132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.047 [2024-10-13 04:08:18.166215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.047 [2024-10-13 04:08:18.166225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.047 [2024-10-13 04:08:18.166233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:25.047 [2024-10-13 04:08:18.166240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.047 [2024-10-13 04:08:18.166265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.047 [2024-10-13 04:08:18.166273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:25.047 [2024-10-13 04:08:18.166281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:25.047 [2024-10-13 04:08:18.166291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.047 [2024-10-13 04:08:18.166312] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:25.047 [2024-10-13 04:08:18.169723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.047 [2024-10-13 04:08:18.169751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.047 [2024-10-13 04:08:18.169760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.417 ms 00:17:25.047 [2024-10-13 04:08:18.169767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.047 [2024-10-13 04:08:18.169800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.047 [2024-10-13 04:08:18.169808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:25.047 [2024-10-13 04:08:18.169816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:25.047 [2024-10-13 04:08:18.169823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.047 [2024-10-13 04:08:18.169840] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:25.047 [2024-10-13 04:08:18.169857] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:25.047 [2024-10-13 04:08:18.169892] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:25.047 [2024-10-13 04:08:18.169907] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:25.047 [2024-10-13 04:08:18.170008] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:25.047 [2024-10-13 04:08:18.170018] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:25.047 [2024-10-13 04:08:18.170029] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:25.047 [2024-10-13 04:08:18.170038] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:25.047 [2024-10-13 04:08:18.170047] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:25.047 [2024-10-13 04:08:18.170054] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:25.047 [2024-10-13 04:08:18.170064] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:25.047 [2024-10-13 04:08:18.170071] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:25.047 [2024-10-13 04:08:18.170078] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:25.047 [2024-10-13 04:08:18.170085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.047 [2024-10-13 04:08:18.170093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:25.047 [2024-10-13 04:08:18.170100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:17:25.047 [2024-10-13 04:08:18.170107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.047 [2024-10-13 04:08:18.170193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.047 [2024-10-13 04:08:18.170201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:25.047 [2024-10-13 04:08:18.170209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:25.047 [2024-10-13 04:08:18.170218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.047 [2024-10-13 04:08:18.170328] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:25.047 [2024-10-13 04:08:18.170339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:25.047 [2024-10-13 04:08:18.170347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.047 [2024-10-13 04:08:18.170354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:25.047 [2024-10-13 04:08:18.170369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:25.047 [2024-10-13 04:08:18.170383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:25.047 [2024-10-13 04:08:18.170389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.047 [2024-10-13 04:08:18.170403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:25.047 [2024-10-13 04:08:18.170410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:25.047 [2024-10-13 04:08:18.170416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.047 [2024-10-13 04:08:18.170429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:25.047 [2024-10-13 04:08:18.170436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:25.047 [2024-10-13 04:08:18.170442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:25.047 [2024-10-13 04:08:18.170454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:25.047 [2024-10-13 04:08:18.170460] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:25.047 [2024-10-13 04:08:18.170473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.047 [2024-10-13 04:08:18.170486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:25.047 [2024-10-13 04:08:18.170492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.047 [2024-10-13 04:08:18.170504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:25.047 [2024-10-13 04:08:18.170511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.047 [2024-10-13 04:08:18.170523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:25.047 [2024-10-13 04:08:18.170529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.047 [2024-10-13 04:08:18.170543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:25.047 [2024-10-13 04:08:18.170550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.047 [2024-10-13 04:08:18.170563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:25.047 [2024-10-13 04:08:18.170569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:25.047 [2024-10-13 04:08:18.170575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.047 [2024-10-13 04:08:18.170582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:25.047 [2024-10-13 04:08:18.170589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:25.047 [2024-10-13 04:08:18.170595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:25.047 [2024-10-13 04:08:18.170608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:25.047 [2024-10-13 04:08:18.170634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170641] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:25.047 [2024-10-13 04:08:18.170649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:25.047 [2024-10-13 04:08:18.170656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.047 [2024-10-13 04:08:18.170663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.047 [2024-10-13 04:08:18.170671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:25.047 [2024-10-13 04:08:18.170678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:25.048 [2024-10-13 04:08:18.170684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:25.048 
[2024-10-13 04:08:18.170691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:25.048 [2024-10-13 04:08:18.170697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:25.048 [2024-10-13 04:08:18.170704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:25.048 [2024-10-13 04:08:18.170712] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:25.048 [2024-10-13 04:08:18.170724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.048 [2024-10-13 04:08:18.170732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:25.048 [2024-10-13 04:08:18.170739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:25.048 [2024-10-13 04:08:18.170746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:25.048 [2024-10-13 04:08:18.170753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:25.048 [2024-10-13 04:08:18.170760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:25.048 [2024-10-13 04:08:18.170767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:25.048 [2024-10-13 04:08:18.170774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:25.048 [2024-10-13 04:08:18.170781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:25.048 [2024-10-13 04:08:18.170789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:25.048 [2024-10-13 04:08:18.170796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:25.048 [2024-10-13 04:08:18.170803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:25.048 [2024-10-13 04:08:18.170810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:25.048 [2024-10-13 04:08:18.170817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:25.048 [2024-10-13 04:08:18.170824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:25.048 [2024-10-13 04:08:18.170831] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:25.048 [2024-10-13 04:08:18.170839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.048 [2024-10-13 04:08:18.170847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:25.048 [2024-10-13 04:08:18.170854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:25.048 [2024-10-13 04:08:18.170861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:25.048 [2024-10-13 04:08:18.170868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:25.048 [2024-10-13 04:08:18.170876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.048 [2024-10-13 04:08:18.170884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:25.048 [2024-10-13 04:08:18.170891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:17:25.048 [2024-10-13 04:08:18.170900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.048 [2024-10-13 04:08:18.196850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.048 [2024-10-13 04:08:18.196883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:25.048 [2024-10-13 04:08:18.196893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.900 ms 00:17:25.048 [2024-10-13 04:08:18.196901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.048 [2024-10-13 04:08:18.197015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.048 [2024-10-13 04:08:18.197025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:25.048 [2024-10-13 04:08:18.197032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:25.048 [2024-10-13 04:08:18.197043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.238553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.238589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:25.307 [2024-10-13 04:08:18.238601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.489 ms 00:17:25.307 [2024-10-13 04:08:18.238609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.238712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.238724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:25.307 [2024-10-13 04:08:18.238733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:25.307 [2024-10-13 04:08:18.238741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.239064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.239083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:25.307 [2024-10-13 04:08:18.239092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:17:25.307 [2024-10-13 04:08:18.239099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.239226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.239237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:25.307 [2024-10-13 04:08:18.239245] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:25.307 [2024-10-13 04:08:18.239252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.252596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.252641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:25.307 [2024-10-13 04:08:18.252651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.324 ms 00:17:25.307 [2024-10-13 04:08:18.252659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.265296] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:25.307 [2024-10-13 04:08:18.265328] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:25.307 [2024-10-13 04:08:18.265340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.265347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:25.307 [2024-10-13 04:08:18.265355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.586 ms 00:17:25.307 [2024-10-13 04:08:18.265362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.289431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.289470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:25.307 [2024-10-13 04:08:18.289481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.001 ms 00:17:25.307 [2024-10-13 04:08:18.289488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.301416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.301446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:25.307 [2024-10-13 04:08:18.301456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.862 ms 00:17:25.307 [2024-10-13 04:08:18.301462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.313229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.313258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:25.307 [2024-10-13 04:08:18.313268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.706 ms 00:17:25.307 [2024-10-13 04:08:18.313274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.313908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.313940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:25.307 [2024-10-13 04:08:18.313950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:17:25.307 [2024-10-13 04:08:18.313957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.368375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.368546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:25.307 [2024-10-13 04:08:18.368565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.395 ms 00:17:25.307 [2024-10-13 04:08:18.368574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.379003] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:25.307 [2024-10-13 04:08:18.392818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.392853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:25.307 [2024-10-13 04:08:18.392865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.136 ms 00:17:25.307 [2024-10-13 04:08:18.392872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.392948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.392960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:25.307 [2024-10-13 04:08:18.392969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:25.307 [2024-10-13 04:08:18.392976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.307 [2024-10-13 04:08:18.393021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.307 [2024-10-13 04:08:18.393030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:25.307 [2024-10-13 04:08:18.393037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:25.308 [2024-10-13 04:08:18.393045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.308 [2024-10-13 04:08:18.393067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.308 [2024-10-13 04:08:18.393079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:25.308 [2024-10-13 04:08:18.393088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:25.308 [2024-10-13 04:08:18.393095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.308 [2024-10-13 04:08:18.393123] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:25.308 [2024-10-13 04:08:18.393132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.308 [2024-10-13 04:08:18.393140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:25.308 [2024-10-13 04:08:18.393148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:25.308 [2024-10-13 04:08:18.393155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.308 [2024-10-13 04:08:18.417161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.308 [2024-10-13 04:08:18.417199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:25.308 [2024-10-13 04:08:18.417209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.984 ms 00:17:25.308 [2024-10-13 04:08:18.417217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.308 [2024-10-13 04:08:18.417302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.308 [2024-10-13 04:08:18.417313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:25.308 [2024-10-13 04:08:18.417321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:25.308 [2024-10-13 04:08:18.417328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:25.308 [2024-10-13 04:08:18.418116] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:25.308 [2024-10-13 04:08:18.421051] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 275.825 ms, result 0 00:17:25.308 [2024-10-13 04:08:18.421904] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:25.308 [2024-10-13 04:08:18.434674] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:25.565  [2024-10-13T04:08:18.725Z] Copying: 4096/4096 [kB] (average 19 MBps)[2024-10-13 04:08:18.638866] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:25.565 [2024-10-13 04:08:18.647542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.565 [2024-10-13 04:08:18.647572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:25.565 [2024-10-13 04:08:18.647583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:25.565 [2024-10-13 04:08:18.647591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.565 [2024-10-13 04:08:18.647624] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:25.565 [2024-10-13 04:08:18.650333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.565 [2024-10-13 04:08:18.650366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:25.565 [2024-10-13 04:08:18.650376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.696 ms 00:17:25.565 [2024-10-13 04:08:18.650384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.565 [2024-10-13 04:08:18.652818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.565 [2024-10-13 04:08:18.652849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:25.565 [2024-10-13 04:08:18.652858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.414 ms 00:17:25.565 [2024-10-13 04:08:18.652865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.565 [2024-10-13 04:08:18.657213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.565 [2024-10-13 04:08:18.657238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:25.565 [2024-10-13 04:08:18.657247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.332 ms 00:17:25.565 [2024-10-13 04:08:18.657258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.565 [2024-10-13 04:08:18.664423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.565 [2024-10-13 04:08:18.664450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:25.565 [2024-10-13 04:08:18.664460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.138 ms 00:17:25.565 [2024-10-13 04:08:18.664468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.565 [2024-10-13 04:08:18.687662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.565 [2024-10-13 04:08:18.687786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:25.565 [2024-10-13 04:08:18.687801] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 23.140 ms 00:17:25.565 [2024-10-13 04:08:18.687808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.565 [2024-10-13 04:08:18.701801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.565 [2024-10-13 04:08:18.701832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:25.565 [2024-10-13 04:08:18.701844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.962 ms 00:17:25.565 [2024-10-13 04:08:18.701856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.565 [2024-10-13 04:08:18.701985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.565 [2024-10-13 04:08:18.701995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:25.565 [2024-10-13 04:08:18.702003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:25.565 [2024-10-13 04:08:18.702011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.824 [2024-10-13 04:08:18.725867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.824 [2024-10-13 04:08:18.725896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:25.824 [2024-10-13 04:08:18.725906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.834 ms 00:17:25.824 [2024-10-13 04:08:18.725913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.824 [2024-10-13 04:08:18.748564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.824 [2024-10-13 04:08:18.748594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:25.824 [2024-10-13 04:08:18.748604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.618 ms 00:17:25.824 [2024-10-13 04:08:18.748611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.824 [2024-10-13 04:08:18.771601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.824 [2024-10-13 04:08:18.771641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:25.824 [2024-10-13 04:08:18.771651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.944 ms 00:17:25.824 [2024-10-13 04:08:18.771658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.824 [2024-10-13 04:08:18.794452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.824 [2024-10-13 04:08:18.794482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:25.824 [2024-10-13 04:08:18.794492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.738 ms 00:17:25.824 [2024-10-13 04:08:18.794498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.824 [2024-10-13 04:08:18.794530] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:25.824 [2024-10-13 04:08:18.794543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:17:25.824 [2024-10-13 04:08:18.794580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:25.824 [2024-10-13 04:08:18.794794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.794999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795153] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:25.825 [2024-10-13 04:08:18.795322] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:25.825 [2024-10-13 04:08:18.795329] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 44f1066e-b8f1-4a97-9db9-72a27b163b91 00:17:25.825 [2024-10-13 04:08:18.795337] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:25.825 [2024-10-13 04:08:18.795343] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:25.825 [2024-10-13 04:08:18.795350] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:25.825 [2024-10-13 04:08:18.795358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:25.825 [2024-10-13 04:08:18.795364] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:25.825 [2024-10-13 04:08:18.795371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:25.825 [2024-10-13 04:08:18.795378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:25.825 [2024-10-13 04:08:18.795385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:25.825 [2024-10-13 04:08:18.795391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:25.825 [2024-10-13 04:08:18.795398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.825 [2024-10-13 04:08:18.795405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:25.825 [2024-10-13 04:08:18.795412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:17:25.825 [2024-10-13 04:08:18.795421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.825 [2024-10-13 04:08:18.807374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.825 [2024-10-13 04:08:18.807403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:25.825 [2024-10-13 04:08:18.807412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.936 ms 00:17:25.825 [2024-10-13 04:08:18.807419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.825 [2024-10-13 04:08:18.807789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.825 [2024-10-13 04:08:18.807798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:25.825 [2024-10-13 04:08:18.807811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:17:25.825 [2024-10-13 04:08:18.807818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.825 [2024-10-13 04:08:18.842804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.825 [2024-10-13 04:08:18.842835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:25.825 [2024-10-13 04:08:18.842845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.825 [2024-10-13 04:08:18.842852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.825 [2024-10-13 04:08:18.842914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.825 [2024-10-13 04:08:18.842922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:25.826 [2024-10-13 04:08:18.842934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.826 [2024-10-13 04:08:18.842942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.826 [2024-10-13 04:08:18.842979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.826 [2024-10-13 04:08:18.842987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:25.826 [2024-10-13 04:08:18.842995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.826 [2024-10-13 04:08:18.843002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.826 [2024-10-13 04:08:18.843023] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.826 [2024-10-13 04:08:18.843030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:25.826 [2024-10-13 04:08:18.843038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.826 [2024-10-13 04:08:18.843047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.826 [2024-10-13 04:08:18.920256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:25.826 [2024-10-13 04:08:18.920290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:25.826 [2024-10-13 04:08:18.920301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:25.826 [2024-10-13 04:08:18.920308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.084 [2024-10-13 04:08:18.983658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.084 [2024-10-13 04:08:18.983698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.084 [2024-10-13 04:08:18.983713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.084 [2024-10-13 04:08:18.983721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.084 [2024-10-13 04:08:18.983768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.084 [2024-10-13 04:08:18.983777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.084 [2024-10-13 04:08:18.983785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.084 [2024-10-13 04:08:18.983792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.084 [2024-10-13 04:08:18.983819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.084 [2024-10-13 04:08:18.983827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.084 [2024-10-13 04:08:18.983835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.084 [2024-10-13 04:08:18.983842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.084 [2024-10-13 04:08:18.983926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.084 [2024-10-13 04:08:18.983936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.084 [2024-10-13 04:08:18.983944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.084 [2024-10-13 04:08:18.983951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.084 [2024-10-13 04:08:18.983980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.084 [2024-10-13 04:08:18.983989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:26.084 [2024-10-13 04:08:18.983996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.084 [2024-10-13 04:08:18.984003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.084 [2024-10-13 04:08:18.984056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.084 [2024-10-13 04:08:18.984065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.084 [2024-10-13 04:08:18.984073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.084 [2024-10-13 04:08:18.984080] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:26.084 [2024-10-13 04:08:18.984121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:26.084 [2024-10-13 04:08:18.984131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.084 [2024-10-13 04:08:18.984139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:26.084 [2024-10-13 04:08:18.984146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.084 [2024-10-13 04:08:18.984273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.719 ms, result 0 00:17:26.651 00:17:26.651 00:17:26.651 04:08:19 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74340 00:17:26.651 04:08:19 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74340 00:17:26.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.651 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74340 ']' 00:17:26.651 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.651 04:08:19 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:26.651 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.651 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.651 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.651 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:26.651 [2024-10-13 04:08:19.738937] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:17:26.651 [2024-10-13 04:08:19.739060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74340 ] 00:17:26.910 [2024-10-13 04:08:19.889315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.910 [2024-10-13 04:08:19.986136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.476 04:08:20 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:27.477 04:08:20 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:27.477 04:08:20 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:27.735 [2024-10-13 04:08:20.786316] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:27.735 [2024-10-13 04:08:20.786376] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:27.995 [2024-10-13 04:08:20.961102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.995 [2024-10-13 04:08:20.961144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:27.996 [2024-10-13 04:08:20.961159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:27.996 [2024-10-13 04:08:20.961167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.996 [2024-10-13 04:08:20.963783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.996 [2024-10-13 04:08:20.963815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:27.996 [2024-10-13 04:08:20.963826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.598 ms 00:17:27.996 [2024-10-13 04:08:20.963833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.996 [2024-10-13 04:08:20.963905] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:27.996 [2024-10-13 04:08:20.964571] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:27.996 [2024-10-13 04:08:20.964598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.996 [2024-10-13 04:08:20.964606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:27.996 [2024-10-13 04:08:20.964626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:17:27.996 [2024-10-13 04:08:20.964634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.996 [2024-10-13 04:08:20.966030] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:27.996 [2024-10-13 04:08:20.978638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.996 [2024-10-13 04:08:20.978677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:27.996 [2024-10-13 04:08:20.978689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.613 ms 00:17:27.996 [2024-10-13 04:08:20.978698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.996 [2024-10-13 04:08:20.978780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.996 [2024-10-13 04:08:20.978792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:27.996 [2024-10-13 04:08:20.978801] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:27.996 [2024-10-13 04:08:20.978809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.996 [2024-10-13 04:08:20.983708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.996 [2024-10-13 04:08:20.983742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:27.996 [2024-10-13 04:08:20.983751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.852 ms 00:17:27.996 [2024-10-13 04:08:20.983760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.996 [2024-10-13 04:08:20.983857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.996 [2024-10-13 04:08:20.983868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:27.996 [2024-10-13 04:08:20.983876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:27.996 [2024-10-13 04:08:20.983884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.996 [2024-10-13 04:08:20.983908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.996 [2024-10-13 04:08:20.983921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:27.996 [2024-10-13 04:08:20.983928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:27.996 [2024-10-13 04:08:20.983937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.996 [2024-10-13 04:08:20.983960] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:27.996 [2024-10-13 04:08:20.987264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.996 [2024-10-13 04:08:20.987289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:27.996 [2024-10-13 04:08:20.987300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.308 ms 00:17:27.996 [2024-10-13 04:08:20.987307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.996 [2024-10-13 04:08:20.987342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.996 [2024-10-13 04:08:20.987350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:27.996 [2024-10-13 04:08:20.987360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:27.996 [2024-10-13 04:08:20.987367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.996 [2024-10-13 04:08:20.987387] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:27.996 [2024-10-13 04:08:20.987406] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:27.996 [2024-10-13 04:08:20.987445] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:27.996 [2024-10-13 04:08:20.987459] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:27.996 [2024-10-13 04:08:20.987565] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:27.996 [2024-10-13 04:08:20.987575] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:27.996 [2024-10-13 04:08:20.987587] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:27.996 [2024-10-13 04:08:20.987597] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:27.996 [2024-10-13 04:08:20.987610] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:27.996 [2024-10-13 04:08:20.987634] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:27.996 [2024-10-13 04:08:20.987643] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:27.996 [2024-10-13 04:08:20.987650] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:27.996 [2024-10-13 04:08:20.987661] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:27.996 [2024-10-13 04:08:20.987668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.996 [2024-10-13 04:08:20.987677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:27.996 [2024-10-13 04:08:20.987684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:17:27.996 [2024-10-13 04:08:20.987693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.996 [2024-10-13 04:08:20.987779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.996 [2024-10-13 04:08:20.987789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:27.996 [2024-10-13 04:08:20.987798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:27.996 [2024-10-13 04:08:20.987807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.996 [2024-10-13 04:08:20.987917] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:27.996 [2024-10-13 04:08:20.987928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:27.996 [2024-10-13 04:08:20.987936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.996 [2024-10-13 04:08:20.987945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.996 [2024-10-13 04:08:20.987952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:27.996 [2024-10-13 04:08:20.987960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:27.996 [2024-10-13 04:08:20.987967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:27.996 [2024-10-13 04:08:20.987978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:27.996 [2024-10-13 04:08:20.987984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:27.996 [2024-10-13 04:08:20.987993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.996 [2024-10-13 04:08:20.987999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:27.996 [2024-10-13 04:08:20.988007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:27.996 [2024-10-13 04:08:20.988013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:27.996 [2024-10-13 04:08:20.988021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:27.996 [2024-10-13 04:08:20.988035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:27.996 [2024-10-13 04:08:20.988043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.996 
[2024-10-13 04:08:20.988050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:27.996 [2024-10-13 04:08:20.988059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:27.996 [2024-10-13 04:08:20.988066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.996 [2024-10-13 04:08:20.988075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:27.996 [2024-10-13 04:08:20.988086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:27.996 [2024-10-13 04:08:20.988094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.996 [2024-10-13 04:08:20.988101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:27.996 [2024-10-13 04:08:20.988111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:27.996 [2024-10-13 04:08:20.988117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.996 [2024-10-13 04:08:20.988126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:27.996 [2024-10-13 04:08:20.988132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:27.996 [2024-10-13 04:08:20.988140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.996 [2024-10-13 04:08:20.988147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:27.996 [2024-10-13 04:08:20.988154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:27.996 [2024-10-13 04:08:20.988160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:27.996 [2024-10-13 04:08:20.988170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:27.996 [2024-10-13 04:08:20.988176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:27.996 [2024-10-13 04:08:20.988184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.996 [2024-10-13 04:08:20.988191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:27.996 [2024-10-13 04:08:20.988198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:27.996 [2024-10-13 04:08:20.988204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:27.996 [2024-10-13 04:08:20.988212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:27.996 [2024-10-13 04:08:20.988218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:27.996 [2024-10-13 04:08:20.988227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.996 [2024-10-13 04:08:20.988234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:27.996 [2024-10-13 04:08:20.988242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:27.996 [2024-10-13 04:08:20.988249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.996 [2024-10-13 04:08:20.988257] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:27.996 [2024-10-13 04:08:20.988264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:27.996 [2024-10-13 04:08:20.988272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:27.996 [2024-10-13 04:08:20.988281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:27.997 [2024-10-13 04:08:20.988289] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:27.997 [2024-10-13 04:08:20.988298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:27.997 [2024-10-13 04:08:20.988306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:27.997 [2024-10-13 04:08:20.988313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:27.997 [2024-10-13 04:08:20.988320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:27.997 [2024-10-13 04:08:20.988327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:27.997 [2024-10-13 04:08:20.988337] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:27.997 [2024-10-13 04:08:20.988346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.997 [2024-10-13 04:08:20.988358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:27.997 [2024-10-13 04:08:20.988366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:27.997 [2024-10-13 04:08:20.988374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:27.997 [2024-10-13 04:08:20.988382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:27.997 [2024-10-13 04:08:20.988391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:27.997 [2024-10-13 04:08:20.988398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:27.997 [2024-10-13 04:08:20.988407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:27.997 [2024-10-13 04:08:20.988413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:27.997 [2024-10-13 04:08:20.988422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:27.997 [2024-10-13 04:08:20.988430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:27.997 [2024-10-13 04:08:20.988438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:27.997 [2024-10-13 04:08:20.988445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:27.997 [2024-10-13 04:08:20.988453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:27.997 [2024-10-13 04:08:20.988460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:27.997 [2024-10-13 04:08:20.988469] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:27.997 [2024-10-13 
04:08:20.988477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:27.997 [2024-10-13 04:08:20.988487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:27.997 [2024-10-13 04:08:20.988494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:27.997 [2024-10-13 04:08:20.988503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:27.997 [2024-10-13 04:08:20.988510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:27.997 [2024-10-13 04:08:20.988519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:20.988526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:27.997 [2024-10-13 04:08:20.988534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:17:27.997 [2024-10-13 04:08:20.988541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.997 [2024-10-13 04:08:21.014544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:21.014579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:27.997 [2024-10-13 04:08:21.014592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.948 ms 00:17:27.997 [2024-10-13 04:08:21.014599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.997 [2024-10-13 04:08:21.014730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:21.014742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:27.997 [2024-10-13 04:08:21.014752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:27.997 [2024-10-13 04:08:21.014759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.997 [2024-10-13 04:08:21.045059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:21.045186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:27.997 [2024-10-13 04:08:21.045207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.277 ms 00:17:27.997 [2024-10-13 04:08:21.045217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.997 [2024-10-13 04:08:21.045274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:21.045283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.997 [2024-10-13 04:08:21.045293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:27.997 [2024-10-13 04:08:21.045300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.997 [2024-10-13 04:08:21.045648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:21.045663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.997 [2024-10-13 04:08:21.045673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:17:27.997 [2024-10-13 04:08:21.045680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:27.997 [2024-10-13 04:08:21.045804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:21.045813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.997 [2024-10-13 04:08:21.045822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:27.997 [2024-10-13 04:08:21.045829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.997 [2024-10-13 04:08:21.060053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:21.060164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.997 [2024-10-13 04:08:21.060181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.202 ms 00:17:27.997 [2024-10-13 04:08:21.060189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.997 [2024-10-13 04:08:21.072848] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:27.997 [2024-10-13 04:08:21.072880] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:27.997 [2024-10-13 04:08:21.072892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:21.072900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:27.997 [2024-10-13 04:08:21.072910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.597 ms 00:17:27.997 [2024-10-13 04:08:21.072917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.997 [2024-10-13 04:08:21.097156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:21.097187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:27.997 [2024-10-13 04:08:21.097200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.170 ms 00:17:27.997 [2024-10-13 04:08:21.097207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.997 [2024-10-13 04:08:21.108877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:21.108906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:27.997 [2024-10-13 04:08:21.108919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.603 ms 00:17:27.997 [2024-10-13 04:08:21.108926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.997 [2024-10-13 04:08:21.120285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:21.120315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:27.997 [2024-10-13 04:08:21.120326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.295 ms 00:17:27.997 [2024-10-13 04:08:21.120334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.997 [2024-10-13 04:08:21.120949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.997 [2024-10-13 04:08:21.120966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:27.997 [2024-10-13 04:08:21.120977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:17:27.997 [2024-10-13 04:08:21.120985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.256 [2024-10-13 
04:08:21.186083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.256 [2024-10-13 04:08:21.186135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:28.256 [2024-10-13 04:08:21.186151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.073 ms 00:17:28.256 [2024-10-13 04:08:21.186160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.256 [2024-10-13 04:08:21.196558] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:28.256 [2024-10-13 04:08:21.210216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.256 [2024-10-13 04:08:21.210383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:28.256 [2024-10-13 04:08:21.210401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.968 ms 00:17:28.256 [2024-10-13 04:08:21.210410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.256 [2024-10-13 04:08:21.210483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.256 [2024-10-13 04:08:21.210495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:28.256 [2024-10-13 04:08:21.210504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:28.256 [2024-10-13 04:08:21.210513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.256 [2024-10-13 04:08:21.210561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.256 [2024-10-13 04:08:21.210572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:28.256 [2024-10-13 04:08:21.210580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:28.256 [2024-10-13 04:08:21.210589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.256 [2024-10-13 04:08:21.210637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.256 [2024-10-13 04:08:21.210650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:28.256 [2024-10-13 04:08:21.210658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:28.256 [2024-10-13 04:08:21.210669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.256 [2024-10-13 04:08:21.210698] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:28.256 [2024-10-13 04:08:21.210710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.256 [2024-10-13 04:08:21.210718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:28.256 [2024-10-13 04:08:21.210727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:28.256 [2024-10-13 04:08:21.210736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.256 [2024-10-13 04:08:21.234804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.256 [2024-10-13 04:08:21.234847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:28.256 [2024-10-13 04:08:21.234860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.044 ms 00:17:28.256 [2024-10-13 04:08:21.234868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.256 [2024-10-13 04:08:21.234956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.256 [2024-10-13 04:08:21.234966] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:28.256 [2024-10-13 04:08:21.234978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:28.257 [2024-10-13 04:08:21.234985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.257 [2024-10-13 04:08:21.235775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:28.257 [2024-10-13 04:08:21.238678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.378 ms, result 0 00:17:28.257 [2024-10-13 04:08:21.240723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:28.257 Some configs were skipped because the RPC state that can call them passed over. 00:17:28.257 04:08:21 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:28.515 [2024-10-13 04:08:21.468949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.515 [2024-10-13 04:08:21.469088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:28.515 [2024-10-13 04:08:21.469162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.310 ms 00:17:28.515 [2024-10-13 04:08:21.469189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.515 [2024-10-13 04:08:21.469238] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.611 ms, result 0 00:17:28.515 true 00:17:28.515 04:08:21 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:28.515 [2024-10-13 04:08:21.668400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.515 [2024-10-13 04:08:21.668524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:28.515 [2024-10-13 04:08:21.668577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.536 ms 00:17:28.515 [2024-10-13 04:08:21.668599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.515 [2024-10-13 04:08:21.668668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.803 ms, result 0 00:17:28.515 true 00:17:28.773 04:08:21 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74340 00:17:28.773 04:08:21 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74340 ']' 00:17:28.773 04:08:21 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74340 00:17:28.773 04:08:21 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:28.773 04:08:21 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:28.773 04:08:21 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74340 00:17:28.773 killing process with pid 74340 00:17:28.773 04:08:21 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:28.773 04:08:21 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:28.773 04:08:21 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74340' 00:17:28.773 04:08:21 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74340 00:17:28.773 04:08:21 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74340 00:17:29.341 [2024-10-13 04:08:22.391941] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.341 [2024-10-13 04:08:22.391993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:29.341 [2024-10-13 04:08:22.392006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:29.341 [2024-10-13 04:08:22.392015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.341 [2024-10-13 04:08:22.392044] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:29.341 [2024-10-13 04:08:22.394634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.341 [2024-10-13 04:08:22.394662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:29.341 [2024-10-13 04:08:22.394678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:17:29.341 [2024-10-13 04:08:22.394686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.341 [2024-10-13 04:08:22.394986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.341 [2024-10-13 04:08:22.395000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:29.341 [2024-10-13 04:08:22.395010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:17:29.341 [2024-10-13 04:08:22.395017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.341 [2024-10-13 04:08:22.399025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.341 [2024-10-13 04:08:22.399052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:29.341 [2024-10-13 04:08:22.399062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.988 ms 00:17:29.341 [2024-10-13 04:08:22.399069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.341 [2024-10-13 04:08:22.406113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.341 [2024-10-13 04:08:22.406256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:29.341 [2024-10-13 04:08:22.406277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.007 ms 00:17:29.341 [2024-10-13 04:08:22.406284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.341 [2024-10-13 04:08:22.415334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.341 [2024-10-13 04:08:22.415364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:29.341 [2024-10-13 04:08:22.415378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.000 ms 00:17:29.341 [2024-10-13 04:08:22.415391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.341 [2024-10-13 04:08:22.422491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.341 [2024-10-13 04:08:22.422521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:29.341 [2024-10-13 04:08:22.422533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.062 ms 00:17:29.341 [2024-10-13 04:08:22.422543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.341 [2024-10-13 04:08:22.422690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.341 [2024-10-13 04:08:22.422701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:29.341 [2024-10-13 04:08:22.422711] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:29.341 [2024-10-13 04:08:22.422718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.341 [2024-10-13 04:08:22.432394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.341 [2024-10-13 04:08:22.432423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:29.341 [2024-10-13 04:08:22.432435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.655 ms 00:17:29.341 [2024-10-13 04:08:22.432442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.341 [2024-10-13 04:08:22.441832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.341 [2024-10-13 04:08:22.441859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:29.341 [2024-10-13 04:08:22.441872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.353 ms 00:17:29.341 [2024-10-13 04:08:22.441879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.341 [2024-10-13 04:08:22.451001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.341 [2024-10-13 04:08:22.451030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:29.341 [2024-10-13 04:08:22.451041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.084 ms 00:17:29.341 [2024-10-13 04:08:22.451048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.341 [2024-10-13 04:08:22.460053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.341 [2024-10-13 04:08:22.460082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:29.341 [2024-10-13 04:08:22.460093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.943 ms 00:17:29.341 [2024-10-13 04:08:22.460100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.341 [2024-10-13 04:08:22.460147] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:29.341 [2024-10-13 04:08:22.460160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460245] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:29.341 [2024-10-13 04:08:22.460358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 
[2024-10-13 04:08:22.460449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:29.342 [2024-10-13 04:08:22.460667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.460997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.461006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:29.342 [2024-10-13 04:08:22.461021] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:29.342 [2024-10-13 04:08:22.461032] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 44f1066e-b8f1-4a97-9db9-72a27b163b91 00:17:29.342 [2024-10-13 04:08:22.461044] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:29.342 [2024-10-13 04:08:22.461054] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:29.342 [2024-10-13 04:08:22.461063] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:29.342 [2024-10-13 04:08:22.461072] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:29.342 [2024-10-13 04:08:22.461079] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:29.342 [2024-10-13 04:08:22.461087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:29.342 [2024-10-13 04:08:22.461094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:29.342 [2024-10-13 04:08:22.461101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:29.342 [2024-10-13 04:08:22.461107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:29.342 [2024-10-13 04:08:22.461116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:29.342 [2024-10-13 04:08:22.461123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:29.342 [2024-10-13 04:08:22.461132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:17:29.342 [2024-10-13 04:08:22.461139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.342 [2024-10-13 04:08:22.473408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.342 [2024-10-13 04:08:22.473430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:29.342 [2024-10-13 04:08:22.473442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.250 ms 00:17:29.342 [2024-10-13 04:08:22.473451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.343 [2024-10-13 04:08:22.473840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.343 [2024-10-13 04:08:22.473858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:29.343 [2024-10-13 04:08:22.473868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:17:29.343 [2024-10-13 04:08:22.473875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 04:08:22.517256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.601 [2024-10-13 04:08:22.517285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.601 [2024-10-13 04:08:22.517297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.601 [2024-10-13 04:08:22.517305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 04:08:22.517401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.601 [2024-10-13 04:08:22.517410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.601 [2024-10-13 04:08:22.517420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.601 [2024-10-13 04:08:22.517427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 04:08:22.517469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.601 [2024-10-13 04:08:22.517478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.601 [2024-10-13 04:08:22.517488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.601 [2024-10-13 04:08:22.517495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 04:08:22.517514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.601 [2024-10-13 04:08:22.517522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.601 [2024-10-13 04:08:22.517531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.601 [2024-10-13 04:08:22.517538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 04:08:22.593478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.601 [2024-10-13 04:08:22.593517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.601 [2024-10-13 04:08:22.593529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.601 [2024-10-13 04:08:22.593537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 
04:08:22.644973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.601 [2024-10-13 04:08:22.645011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.601 [2024-10-13 04:08:22.645021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.601 [2024-10-13 04:08:22.645028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 04:08:22.645099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.601 [2024-10-13 04:08:22.645108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.601 [2024-10-13 04:08:22.645118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.601 [2024-10-13 04:08:22.645124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 04:08:22.645148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.601 [2024-10-13 04:08:22.645154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.601 [2024-10-13 04:08:22.645162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.601 [2024-10-13 04:08:22.645168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 04:08:22.645237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.601 [2024-10-13 04:08:22.645244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.601 [2024-10-13 04:08:22.645253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.601 [2024-10-13 04:08:22.645259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 04:08:22.645287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.601 [2024-10-13 04:08:22.645294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:29.601 [2024-10-13 04:08:22.645301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.601 [2024-10-13 04:08:22.645306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 04:08:22.645337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.601 [2024-10-13 04:08:22.645344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.601 [2024-10-13 04:08:22.645354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.601 [2024-10-13 04:08:22.645360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 04:08:22.645394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.601 [2024-10-13 04:08:22.645401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.601 [2024-10-13 04:08:22.645408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.601 [2024-10-13 04:08:22.645414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.601 [2024-10-13 04:08:22.645516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 253.561 ms, result 0 00:17:30.168 04:08:23 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:30.168 [2024-10-13 04:08:23.213784] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:17:30.168 [2024-10-13 04:08:23.214113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74393 ] 00:17:30.426 [2024-10-13 04:08:23.362423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.426 [2024-10-13 04:08:23.438507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.686 [2024-10-13 04:08:23.646168] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:30.686 [2024-10-13 04:08:23.646209] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:30.686 [2024-10-13 04:08:23.794035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.686 [2024-10-13 04:08:23.794074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:30.686 [2024-10-13 04:08:23.794084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:30.686 [2024-10-13 04:08:23.794091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.686 [2024-10-13 04:08:23.796128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.686 [2024-10-13 04:08:23.796155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.686 [2024-10-13 04:08:23.796163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.025 ms 00:17:30.686 [2024-10-13 04:08:23.796169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.686 [2024-10-13 04:08:23.796227] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:30.686 [2024-10-13 04:08:23.796740] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:30.686 [2024-10-13 04:08:23.796849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.686 [2024-10-13 04:08:23.796859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.686 [2024-10-13 04:08:23.796866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:17:30.686 [2024-10-13 04:08:23.796872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.686 [2024-10-13 04:08:23.797877] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:30.686 [2024-10-13 04:08:23.807522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.686 [2024-10-13 04:08:23.807551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:30.686 [2024-10-13 04:08:23.807560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.646 ms 00:17:30.686 [2024-10-13 04:08:23.807569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.686 [2024-10-13 04:08:23.807646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.686 [2024-10-13 04:08:23.807655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:30.686 [2024-10-13 04:08:23.807663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:30.686 [2024-10-13 
04:08:23.807668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.686 [2024-10-13 04:08:23.812170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.686 [2024-10-13 04:08:23.812289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.686 [2024-10-13 04:08:23.812302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.471 ms 00:17:30.686 [2024-10-13 04:08:23.812308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.686 [2024-10-13 04:08:23.812390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.686 [2024-10-13 04:08:23.812398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.686 [2024-10-13 04:08:23.812405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:30.686 [2024-10-13 04:08:23.812411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.686 [2024-10-13 04:08:23.812427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.686 [2024-10-13 04:08:23.812433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:30.686 [2024-10-13 04:08:23.812439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:30.686 [2024-10-13 04:08:23.812446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.686 [2024-10-13 04:08:23.812463] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:30.686 [2024-10-13 04:08:23.815056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.686 [2024-10-13 04:08:23.815077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.686 [2024-10-13 04:08:23.815084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:17:30.686 [2024-10-13 04:08:23.815090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.686 [2024-10-13 04:08:23.815116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.686 [2024-10-13 04:08:23.815123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:30.686 [2024-10-13 04:08:23.815129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:30.686 [2024-10-13 04:08:23.815135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.686 [2024-10-13 04:08:23.815148] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:30.686 [2024-10-13 04:08:23.815162] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:30.686 [2024-10-13 04:08:23.815190] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:30.686 [2024-10-13 04:08:23.815203] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:30.686 [2024-10-13 04:08:23.815282] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:30.686 [2024-10-13 04:08:23.815291] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:30.686 [2024-10-13 04:08:23.815298] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:17:30.686 [2024-10-13 04:08:23.815306] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:30.686 [2024-10-13 04:08:23.815314] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:30.686 [2024-10-13 04:08:23.815320] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:30.686 [2024-10-13 04:08:23.815328] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:30.686 [2024-10-13 04:08:23.815333] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:30.686 [2024-10-13 04:08:23.815339] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:30.686 [2024-10-13 04:08:23.815345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.686 [2024-10-13 04:08:23.815350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:30.686 [2024-10-13 04:08:23.815356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:17:30.686 [2024-10-13 04:08:23.815363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.687 [2024-10-13 04:08:23.815429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.687 [2024-10-13 04:08:23.815436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:30.687 [2024-10-13 04:08:23.815442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:30.687 [2024-10-13 04:08:23.815451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.687 [2024-10-13 04:08:23.815523] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:30.687 [2024-10-13 04:08:23.815531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:30.687 [2024-10-13 04:08:23.815537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.687 [2024-10-13 04:08:23.815543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.687 [2024-10-13 04:08:23.815549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:30.687 [2024-10-13 04:08:23.815554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:30.687 [2024-10-13 04:08:23.815559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:30.687 [2024-10-13 04:08:23.815564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:30.687 [2024-10-13 04:08:23.815571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:30.687 [2024-10-13 04:08:23.815576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.687 [2024-10-13 04:08:23.815581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:30.687 [2024-10-13 04:08:23.815585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:30.687 [2024-10-13 04:08:23.815590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.687 [2024-10-13 04:08:23.815601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:30.687 [2024-10-13 04:08:23.815607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:30.687 [2024-10-13 04:08:23.815734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.687 [2024-10-13 04:08:23.815760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:30.687 [2024-10-13 04:08:23.815776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:30.687 [2024-10-13 04:08:23.815791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.687 [2024-10-13 04:08:23.815805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:30.687 [2024-10-13 04:08:23.815819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:30.687 [2024-10-13 04:08:23.815832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.687 [2024-10-13 04:08:23.815846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:30.687 [2024-10-13 04:08:23.815897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:30.687 [2024-10-13 04:08:23.815914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.687 [2024-10-13 04:08:23.815928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:30.687 [2024-10-13 04:08:23.815942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:30.687 [2024-10-13 04:08:23.815956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.687 [2024-10-13 04:08:23.815969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:30.687 [2024-10-13 04:08:23.815983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:30.687 [2024-10-13 04:08:23.815997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.687 [2024-10-13 04:08:23.816043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:30.687 [2024-10-13 04:08:23.816060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:30.687 [2024-10-13 04:08:23.816073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.687 [2024-10-13 04:08:23.816088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:30.687 [2024-10-13 04:08:23.816102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:30.687 [2024-10-13 04:08:23.816115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.687 [2024-10-13 04:08:23.816129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:30.687 [2024-10-13 04:08:23.816143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:30.687 [2024-10-13 04:08:23.816157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.687 [2024-10-13 04:08:23.816195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:30.687 [2024-10-13 04:08:23.816213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:30.687 [2024-10-13 04:08:23.816227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.687 [2024-10-13 04:08:23.816356] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:30.687 [2024-10-13 04:08:23.816374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:30.687 [2024-10-13 04:08:23.816390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.687 [2024-10-13 04:08:23.816404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.687 [2024-10-13 04:08:23.816419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:30.687 [2024-10-13 04:08:23.816433] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:30.687 [2024-10-13 04:08:23.816447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:30.687 [2024-10-13 04:08:23.816461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:30.687 [2024-10-13 04:08:23.816475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:30.687 [2024-10-13 04:08:23.816512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:30.687 [2024-10-13 04:08:23.816530] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:30.687 [2024-10-13 04:08:23.816559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.687 [2024-10-13 04:08:23.816582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:30.687 [2024-10-13 04:08:23.816604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:30.687 [2024-10-13 04:08:23.816675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:30.687 [2024-10-13 04:08:23.816701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:30.687 [2024-10-13 04:08:23.816723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:30.687 [2024-10-13 04:08:23.816745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:30.687 [2024-10-13 04:08:23.816767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:30.687 [2024-10-13 04:08:23.816813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:30.687 [2024-10-13 04:08:23.816858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:30.687 [2024-10-13 04:08:23.816882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:30.687 [2024-10-13 04:08:23.816974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:30.687 [2024-10-13 04:08:23.817000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:30.687 [2024-10-13 04:08:23.817022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:30.687 [2024-10-13 04:08:23.817078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:30.687 [2024-10-13 04:08:23.817102] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:30.687 [2024-10-13 04:08:23.817126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.687 [2024-10-13 04:08:23.817148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:30.687 [2024-10-13 04:08:23.817208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:30.687 [2024-10-13 04:08:23.817249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:30.687 [2024-10-13 04:08:23.817272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:30.687 [2024-10-13 04:08:23.817296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.687 [2024-10-13 04:08:23.817311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:30.687 [2024-10-13 04:08:23.817327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.824 ms 00:17:30.687 [2024-10-13 04:08:23.817478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.687 [2024-10-13 04:08:23.838633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.687 [2024-10-13 04:08:23.838727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.687 [2024-10-13 04:08:23.838770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.075 ms 00:17:30.687 [2024-10-13 04:08:23.838787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.687 [2024-10-13 04:08:23.838887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.687 [2024-10-13 04:08:23.838931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:30.687 [2024-10-13 04:08:23.838951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:30.687 [2024-10-13 04:08:23.838970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.946 [2024-10-13 04:08:23.878850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.946 [2024-10-13 04:08:23.878964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.946 [2024-10-13 04:08:23.879011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.834 ms 00:17:30.946 [2024-10-13 04:08:23.879029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.946 [2024-10-13 04:08:23.879100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.946 [2024-10-13 04:08:23.879122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.946 [2024-10-13 04:08:23.879138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:30.946 [2024-10-13 04:08:23.879153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.946 [2024-10-13 04:08:23.879456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.946 [2024-10-13 04:08:23.879495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.946 [2024-10-13 04:08:23.879512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:17:30.946 [2024-10-13 04:08:23.879526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.946 [2024-10-13 04:08:23.879650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:30.946 [2024-10-13 04:08:23.879679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.946 [2024-10-13 04:08:23.879695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:30.946 [2024-10-13 04:08:23.879709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.946 [2024-10-13 04:08:23.890589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.946 [2024-10-13 04:08:23.890696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.946 [2024-10-13 04:08:23.890734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.782 ms 00:17:30.946 [2024-10-13 04:08:23.890752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.946 [2024-10-13 04:08:23.900440] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:30.946 [2024-10-13 04:08:23.900546] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:30.946 [2024-10-13 04:08:23.900595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.946 [2024-10-13 04:08:23.900611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:30.946 [2024-10-13 04:08:23.900667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.739 ms 00:17:30.947 [2024-10-13 04:08:23.900709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:23.919157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.947 [2024-10-13 04:08:23.919277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:30.947 [2024-10-13 04:08:23.919322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.392 ms 00:17:30.947 [2024-10-13 04:08:23.919340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:23.928223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.947 [2024-10-13 04:08:23.928314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:30.947 [2024-10-13 04:08:23.928379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.815 ms 00:17:30.947 [2024-10-13 04:08:23.928395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:23.936878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.947 [2024-10-13 04:08:23.936966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:30.947 [2024-10-13 04:08:23.937009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.435 ms 00:17:30.947 [2024-10-13 04:08:23.937025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:23.937493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.947 [2024-10-13 04:08:23.937565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:30.947 [2024-10-13 04:08:23.937604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:17:30.947 [2024-10-13 04:08:23.937636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:23.980952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.947 [2024-10-13 04:08:23.981108] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:30.947 [2024-10-13 04:08:23.981149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.284 ms 00:17:30.947 [2024-10-13 04:08:23.981166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:23.989025] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:30.947 [2024-10-13 04:08:24.001016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.947 [2024-10-13 04:08:24.001127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:30.947 [2024-10-13 04:08:24.001167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.781 ms 00:17:30.947 [2024-10-13 04:08:24.001186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:24.001280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.947 [2024-10-13 04:08:24.001303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:30.947 [2024-10-13 04:08:24.001319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:30.947 [2024-10-13 04:08:24.001333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:24.001384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.947 [2024-10-13 04:08:24.001454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:30.947 [2024-10-13 04:08:24.001473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:30.947 [2024-10-13 04:08:24.001487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:24.001518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.947 [2024-10-13 04:08:24.001538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:30.947 [2024-10-13 04:08:24.001556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:30.947 [2024-10-13 04:08:24.001570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:24.001653] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:30.947 [2024-10-13 04:08:24.001676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.947 [2024-10-13 04:08:24.001690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:30.947 [2024-10-13 04:08:24.001706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:30.947 [2024-10-13 04:08:24.001720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:24.019605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.947 [2024-10-13 04:08:24.019712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:30.947 [2024-10-13 04:08:24.019750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.861 ms 00:17:30.947 [2024-10-13 04:08:24.019767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:24.019843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.947 [2024-10-13 04:08:24.020044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:30.947 [2024-10-13 04:08:24.020058] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:30.947 [2024-10-13 04:08:24.020065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.947 [2024-10-13 04:08:24.020769] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:30.947 [2024-10-13 04:08:24.023079] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.483 ms, result 0 00:17:30.947 [2024-10-13 04:08:24.023730] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:30.947 [2024-10-13 04:08:24.038487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:32.321  [2024-10-13T04:08:26.416Z] Copying: 24/256 [MB] (24 MBps) [2024-10-13T04:08:27.350Z] Copying: 49/256 [MB] (24 MBps) [2024-10-13T04:08:28.285Z] Copying: 80/256 [MB] (30 MBps) [2024-10-13T04:08:29.219Z] Copying: 120/256 [MB] (40 MBps) [2024-10-13T04:08:30.154Z] Copying: 162/256 [MB] (42 MBps) [2024-10-13T04:08:31.088Z] Copying: 197/256 [MB] (34 MBps) [2024-10-13T04:08:32.024Z] Copying: 227/256 [MB] (29 MBps) [2024-10-13T04:08:32.282Z] Copying: 256/256 [MB] (average 32 MBps)[2024-10-13 04:08:32.230361] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:39.122 [2024-10-13 04:08:32.244274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.122 [2024-10-13 04:08:32.244410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:39.122 [2024-10-13 04:08:32.244467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:39.122 [2024-10-13 04:08:32.244491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.122 [2024-10-13 04:08:32.244530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:39.122 [2024-10-13 04:08:32.247145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.122 [2024-10-13 04:08:32.247255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:39.122 [2024-10-13 04:08:32.247307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.578 ms 00:17:39.122 [2024-10-13 04:08:32.247330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.122 [2024-10-13 04:08:32.247605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.122 [2024-10-13 04:08:32.247652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:39.122 [2024-10-13 04:08:32.247986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:17:39.122 [2024-10-13 04:08:32.248080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.122 [2024-10-13 04:08:32.252226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.122 [2024-10-13 04:08:32.252311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:39.122 [2024-10-13 04:08:32.252365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.098 ms 00:17:39.122 [2024-10-13 04:08:32.252394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.122 [2024-10-13 04:08:32.259364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.122 [2024-10-13 04:08:32.259470] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:39.122 [2024-10-13 04:08:32.259587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.938 ms 00:17:39.122 [2024-10-13 04:08:32.259626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.122 [2024-10-13 04:08:32.282667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.122 [2024-10-13 04:08:32.282782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:39.382 [2024-10-13 04:08:32.282830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.962 ms 00:17:39.382 [2024-10-13 04:08:32.282852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.382 [2024-10-13 04:08:32.296955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.382 [2024-10-13 04:08:32.297065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:39.382 [2024-10-13 04:08:32.297114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.068 ms 00:17:39.382 [2024-10-13 04:08:32.297141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.382 [2024-10-13 04:08:32.297549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.382 [2024-10-13 04:08:32.297652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:39.382 [2024-10-13 04:08:32.297681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:39.382 [2024-10-13 04:08:32.297777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.382 [2024-10-13 04:08:32.321887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.382 [2024-10-13 04:08:32.322010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:39.382 [2024-10-13 04:08:32.322061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.066 ms 00:17:39.382 [2024-10-13 04:08:32.322083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.382 [2024-10-13 04:08:32.345284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.382 [2024-10-13 04:08:32.345390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:39.382 [2024-10-13 04:08:32.345405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.176 ms 00:17:39.382 [2024-10-13 04:08:32.345412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.382 [2024-10-13 04:08:32.368132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.382 [2024-10-13 04:08:32.368266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:39.382 [2024-10-13 04:08:32.368281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.696 ms 00:17:39.382 [2024-10-13 04:08:32.368288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.382 [2024-10-13 04:08:32.391449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.382 [2024-10-13 04:08:32.391567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:39.382 [2024-10-13 04:08:32.391583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.110 ms 00:17:39.382 [2024-10-13 04:08:32.391590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.382 [2024-10-13 04:08:32.391611] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:17:39.382 [2024-10-13 04:08:32.391634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:39.382 [2024-10-13 04:08:32.391807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.391995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392002] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392199] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 
04:08:32.392389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:39.383 [2024-10-13 04:08:32.392405] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:39.383 [2024-10-13 04:08:32.392412] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 44f1066e-b8f1-4a97-9db9-72a27b163b91 00:17:39.383 [2024-10-13 04:08:32.392420] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:39.383 [2024-10-13 04:08:32.392427] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:39.383 [2024-10-13 04:08:32.392434] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:39.383 [2024-10-13 04:08:32.392441] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:39.383 [2024-10-13 04:08:32.392448] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:39.383 [2024-10-13 04:08:32.392455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:39.383 [2024-10-13 04:08:32.392462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:39.383 [2024-10-13 04:08:32.392469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:39.383 [2024-10-13 04:08:32.392475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:39.383 [2024-10-13 04:08:32.392482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.383 [2024-10-13 04:08:32.392490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:39.383 [2024-10-13 04:08:32.392498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:17:39.383 [2024-10-13 04:08:32.392507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.383 [2024-10-13 04:08:32.405001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.383 [2024-10-13 04:08:32.405032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:39.383 [2024-10-13 04:08:32.405043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.464 ms 00:17:39.383 [2024-10-13 04:08:32.405050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.383 [2024-10-13 04:08:32.405399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.383 [2024-10-13 04:08:32.405408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:39.383 [2024-10-13 04:08:32.405420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:17:39.383 [2024-10-13 04:08:32.405427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.384 [2024-10-13 04:08:32.440012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.384 [2024-10-13 04:08:32.440051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.384 [2024-10-13 04:08:32.440062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.384 [2024-10-13 04:08:32.440070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.384 [2024-10-13 04:08:32.440158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.384 [2024-10-13 04:08:32.440167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.384 [2024-10-13 04:08:32.440178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.384 
[2024-10-13 04:08:32.440185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.384 [2024-10-13 04:08:32.440223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.384 [2024-10-13 04:08:32.440232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.384 [2024-10-13 04:08:32.440240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.384 [2024-10-13 04:08:32.440247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.384 [2024-10-13 04:08:32.440266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.384 [2024-10-13 04:08:32.440274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.384 [2024-10-13 04:08:32.440281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.384 [2024-10-13 04:08:32.440290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.384 [2024-10-13 04:08:32.517450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.384 [2024-10-13 04:08:32.517487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.384 [2024-10-13 04:08:32.517499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.384 [2024-10-13 04:08:32.517506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.642 [2024-10-13 04:08:32.580759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.642 [2024-10-13 04:08:32.580801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.642 [2024-10-13 04:08:32.580817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.642 [2024-10-13 04:08:32.580825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.642 [2024-10-13 04:08:32.580897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.642 [2024-10-13 04:08:32.580906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.642 [2024-10-13 04:08:32.580914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.642 [2024-10-13 04:08:32.580921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.642 [2024-10-13 04:08:32.580948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.642 [2024-10-13 04:08:32.580957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.642 [2024-10-13 04:08:32.580964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.643 [2024-10-13 04:08:32.580971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.643 [2024-10-13 04:08:32.581060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.643 [2024-10-13 04:08:32.581069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.643 [2024-10-13 04:08:32.581077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.643 [2024-10-13 04:08:32.581085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.643 [2024-10-13 04:08:32.581114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.643 [2024-10-13 04:08:32.581123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.643 [2024-10-13 04:08:32.581130] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.643 [2024-10-13 04:08:32.581137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.643 [2024-10-13 04:08:32.581178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.643 [2024-10-13 04:08:32.581188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.643 [2024-10-13 04:08:32.581195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.643 [2024-10-13 04:08:32.581203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.643 [2024-10-13 04:08:32.581245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.643 [2024-10-13 04:08:32.581254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.643 [2024-10-13 04:08:32.581262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.643 [2024-10-13 04:08:32.581270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.643 [2024-10-13 04:08:32.581400] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.126 ms, result 0 00:17:40.209 00:17:40.209 00:17:40.209 04:08:33 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:40.776 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:40.776 04:08:33 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:40.776 04:08:33 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:40.776 04:08:33 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:40.776 04:08:33 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:40.776 04:08:33 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:40.776 04:08:33 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:40.776 04:08:33 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74340 00:17:40.776 04:08:33 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74340 ']' 00:17:40.776 04:08:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74340 00:17:40.776 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74340) - No such process 00:17:40.776 04:08:33 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 74340 is not found' 00:17:40.776 Process with pid 74340 is not found 00:17:40.776 ************************************ 00:17:40.776 END TEST ftl_trim 00:17:40.776 ************************************ 00:17:40.776 00:17:40.776 real 1m0.090s 00:17:40.776 user 1m26.243s 00:17:40.776 sys 0m4.904s 00:17:40.776 04:08:33 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:40.776 04:08:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:41.035 04:08:33 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:41.035 04:08:33 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:41.035 04:08:33 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:41.035 04:08:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:41.035 ************************************ 00:17:41.035 START TEST ftl_restore 00:17:41.035 ************************************ 00:17:41.035 04:08:33 ftl.ftl_restore -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:41.035 * Looking for test storage... 00:17:41.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:41.035 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:41.035 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:41.035 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:17:41.035 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.035 04:08:34 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:41.035 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.035 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:41.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.035 --rc genhtml_branch_coverage=1 00:17:41.035 --rc genhtml_function_coverage=1 00:17:41.035 --rc genhtml_legend=1 00:17:41.035 --rc geninfo_all_blocks=1 00:17:41.035 --rc geninfo_unexecuted_blocks=1 00:17:41.035 00:17:41.035 ' 00:17:41.036 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:41.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.036 --rc genhtml_branch_coverage=1 00:17:41.036 --rc genhtml_function_coverage=1 00:17:41.036 --rc genhtml_legend=1 00:17:41.036 --rc geninfo_all_blocks=1 00:17:41.036 --rc geninfo_unexecuted_blocks=1 00:17:41.036 00:17:41.036 ' 00:17:41.036 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:41.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.036 --rc genhtml_branch_coverage=1 00:17:41.036 --rc genhtml_function_coverage=1 00:17:41.036 --rc genhtml_legend=1 00:17:41.036 --rc geninfo_all_blocks=1 00:17:41.036 --rc geninfo_unexecuted_blocks=1 00:17:41.036 00:17:41.036 ' 00:17:41.036 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:41.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.036 --rc genhtml_branch_coverage=1 00:17:41.036 --rc genhtml_function_coverage=1 00:17:41.036 --rc genhtml_legend=1 00:17:41.036 --rc geninfo_all_blocks=1 00:17:41.036 --rc geninfo_unexecuted_blocks=1 00:17:41.036 00:17:41.036 ' 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.pEOk7GSFNq 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:41.036 
04:08:34 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74569 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:41.036 04:08:34 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74569 00:17:41.036 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 74569 ']' 00:17:41.036 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.036 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.036 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.036 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.036 04:08:34 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:41.295 [2024-10-13 04:08:34.224823] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:17:41.295 [2024-10-13 04:08:34.225111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74569 ] 00:17:41.295 [2024-10-13 04:08:34.376483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.553 [2024-10-13 04:08:34.471000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.120 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.120 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:42.120 04:08:35 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:42.120 04:08:35 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:42.120 04:08:35 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:42.121 04:08:35 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:42.121 04:08:35 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:42.121 04:08:35 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:42.379 04:08:35 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:42.379 04:08:35 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:42.379 04:08:35 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:42.379 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:42.379 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:42.379 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:42.379 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:42.379 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:42.638 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:42.638 { 00:17:42.638 "name": "nvme0n1", 00:17:42.638 "aliases": [ 00:17:42.638 "f8033cef-c1a4-45a0-b3ca-2ff60ddbeccd" 00:17:42.638 ], 00:17:42.638 "product_name": "NVMe disk", 00:17:42.638 "block_size": 4096, 00:17:42.638 "num_blocks": 1310720, 00:17:42.638 "uuid": 
"f8033cef-c1a4-45a0-b3ca-2ff60ddbeccd", 00:17:42.638 "numa_id": -1, 00:17:42.638 "assigned_rate_limits": { 00:17:42.638 "rw_ios_per_sec": 0, 00:17:42.638 "rw_mbytes_per_sec": 0, 00:17:42.638 "r_mbytes_per_sec": 0, 00:17:42.638 "w_mbytes_per_sec": 0 00:17:42.638 }, 00:17:42.638 "claimed": true, 00:17:42.638 "claim_type": "read_many_write_one", 00:17:42.638 "zoned": false, 00:17:42.638 "supported_io_types": { 00:17:42.638 "read": true, 00:17:42.638 "write": true, 00:17:42.638 "unmap": true, 00:17:42.638 "flush": true, 00:17:42.638 "reset": true, 00:17:42.638 "nvme_admin": true, 00:17:42.638 "nvme_io": true, 00:17:42.638 "nvme_io_md": false, 00:17:42.638 "write_zeroes": true, 00:17:42.638 "zcopy": false, 00:17:42.638 "get_zone_info": false, 00:17:42.638 "zone_management": false, 00:17:42.638 "zone_append": false, 00:17:42.638 "compare": true, 00:17:42.638 "compare_and_write": false, 00:17:42.638 "abort": true, 00:17:42.638 "seek_hole": false, 00:17:42.638 "seek_data": false, 00:17:42.638 "copy": true, 00:17:42.638 "nvme_iov_md": false 00:17:42.638 }, 00:17:42.638 "driver_specific": { 00:17:42.638 "nvme": [ 00:17:42.638 { 00:17:42.638 "pci_address": "0000:00:11.0", 00:17:42.638 "trid": { 00:17:42.638 "trtype": "PCIe", 00:17:42.638 "traddr": "0000:00:11.0" 00:17:42.638 }, 00:17:42.638 "ctrlr_data": { 00:17:42.638 "cntlid": 0, 00:17:42.638 "vendor_id": "0x1b36", 00:17:42.638 "model_number": "QEMU NVMe Ctrl", 00:17:42.638 "serial_number": "12341", 00:17:42.638 "firmware_revision": "8.0.0", 00:17:42.638 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:42.638 "oacs": { 00:17:42.638 "security": 0, 00:17:42.638 "format": 1, 00:17:42.638 "firmware": 0, 00:17:42.638 "ns_manage": 1 00:17:42.638 }, 00:17:42.638 "multi_ctrlr": false, 00:17:42.638 "ana_reporting": false 00:17:42.638 }, 00:17:42.638 "vs": { 00:17:42.638 "nvme_version": "1.4" 00:17:42.638 }, 00:17:42.638 "ns_data": { 00:17:42.638 "id": 1, 00:17:42.638 "can_share": false 00:17:42.638 } 00:17:42.638 } 00:17:42.638 ], 00:17:42.638 "mp_policy": "active_passive" 00:17:42.638 } 00:17:42.638 } 00:17:42.638 ]' 00:17:42.638 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:42.638 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:42.638 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:42.638 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:42.638 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:42.638 04:08:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:17:42.638 04:08:35 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:42.638 04:08:35 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:42.638 04:08:35 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:42.638 04:08:35 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:42.638 04:08:35 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:42.897 04:08:35 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=e90c0766-aa15-4616-9f55-d311ab8d3c28 00:17:42.897 04:08:35 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:42.897 04:08:35 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e90c0766-aa15-4616-9f55-d311ab8d3c28 00:17:42.897 04:08:36 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:43.155 04:08:36 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=b5d31cd4-58a1-4fcb-8e2b-0032a1565c9a 00:17:43.155 04:08:36 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b5d31cd4-58a1-4fcb-8e2b-0032a1565c9a 00:17:43.414 04:08:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=a2280ca0-ab2b-4d35-a833-646edf70bc44 00:17:43.414 04:08:36 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:43.414 04:08:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a2280ca0-ab2b-4d35-a833-646edf70bc44 00:17:43.414 04:08:36 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:43.414 04:08:36 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:43.414 04:08:36 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=a2280ca0-ab2b-4d35-a833-646edf70bc44 00:17:43.414 04:08:36 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:43.414 04:08:36 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size a2280ca0-ab2b-4d35-a833-646edf70bc44 00:17:43.414 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=a2280ca0-ab2b-4d35-a833-646edf70bc44 00:17:43.414 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:43.414 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:43.414 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:43.414 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a2280ca0-ab2b-4d35-a833-646edf70bc44 00:17:43.673 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:43.673 { 00:17:43.673 "name": "a2280ca0-ab2b-4d35-a833-646edf70bc44", 00:17:43.673 "aliases": [ 00:17:43.673 "lvs/nvme0n1p0" 00:17:43.673 ], 00:17:43.673 "product_name": "Logical Volume", 00:17:43.673 "block_size": 4096, 00:17:43.673 "num_blocks": 26476544, 00:17:43.673 "uuid": "a2280ca0-ab2b-4d35-a833-646edf70bc44", 00:17:43.673 "assigned_rate_limits": { 00:17:43.673 "rw_ios_per_sec": 0, 00:17:43.673 "rw_mbytes_per_sec": 0, 00:17:43.673 "r_mbytes_per_sec": 0, 00:17:43.673 "w_mbytes_per_sec": 0 00:17:43.673 }, 00:17:43.673 "claimed": false, 00:17:43.673 "zoned": false, 00:17:43.673 "supported_io_types": { 00:17:43.673 "read": true, 00:17:43.673 "write": true, 00:17:43.673 "unmap": true, 00:17:43.673 "flush": false, 00:17:43.673 "reset": true, 00:17:43.673 "nvme_admin": false, 00:17:43.673 "nvme_io": false, 00:17:43.673 "nvme_io_md": false, 00:17:43.673 "write_zeroes": true, 00:17:43.673 "zcopy": false, 00:17:43.673 "get_zone_info": false, 00:17:43.673 "zone_management": false, 00:17:43.673 "zone_append": false, 00:17:43.673 "compare": false, 00:17:43.673 "compare_and_write": false, 00:17:43.673 "abort": false, 00:17:43.673 "seek_hole": true, 00:17:43.673 "seek_data": true, 00:17:43.673 "copy": false, 00:17:43.673 "nvme_iov_md": false 00:17:43.673 }, 00:17:43.673 "driver_specific": { 00:17:43.673 "lvol": { 00:17:43.673 "lvol_store_uuid": "b5d31cd4-58a1-4fcb-8e2b-0032a1565c9a", 00:17:43.673 "base_bdev": "nvme0n1", 00:17:43.673 "thin_provision": true, 00:17:43.673 "num_allocated_clusters": 0, 00:17:43.673 "snapshot": false, 00:17:43.673 "clone": false, 00:17:43.673 "esnap_clone": false 00:17:43.673 } 00:17:43.673 } 00:17:43.673 } 00:17:43.673 ]' 00:17:43.673 04:08:36 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:43.673 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:43.673 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:43.673 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:43.673 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:43.673 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:43.673 04:08:36 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:43.673 04:08:36 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:43.673 04:08:36 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:43.932 04:08:36 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:43.932 04:08:36 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:43.932 04:08:36 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size a2280ca0-ab2b-4d35-a833-646edf70bc44 00:17:43.932 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=a2280ca0-ab2b-4d35-a833-646edf70bc44 00:17:43.932 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:43.932 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:43.932 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:43.932 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a2280ca0-ab2b-4d35-a833-646edf70bc44 00:17:44.191 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:44.191 { 00:17:44.191 "name": "a2280ca0-ab2b-4d35-a833-646edf70bc44", 00:17:44.191 "aliases": [ 00:17:44.191 "lvs/nvme0n1p0" 00:17:44.191 ], 00:17:44.191 "product_name": "Logical Volume", 00:17:44.191 "block_size": 4096, 00:17:44.191 "num_blocks": 26476544, 00:17:44.191 "uuid": "a2280ca0-ab2b-4d35-a833-646edf70bc44", 00:17:44.191 "assigned_rate_limits": { 00:17:44.191 "rw_ios_per_sec": 0, 00:17:44.191 "rw_mbytes_per_sec": 0, 00:17:44.191 "r_mbytes_per_sec": 0, 00:17:44.191 "w_mbytes_per_sec": 0 00:17:44.191 }, 00:17:44.191 "claimed": false, 00:17:44.191 "zoned": false, 00:17:44.191 "supported_io_types": { 00:17:44.191 "read": true, 00:17:44.191 "write": true, 00:17:44.191 "unmap": true, 00:17:44.191 "flush": false, 00:17:44.191 "reset": true, 00:17:44.191 "nvme_admin": false, 00:17:44.191 "nvme_io": false, 00:17:44.191 "nvme_io_md": false, 00:17:44.191 "write_zeroes": true, 00:17:44.191 "zcopy": false, 00:17:44.191 "get_zone_info": false, 00:17:44.191 "zone_management": false, 00:17:44.191 "zone_append": false, 00:17:44.191 "compare": false, 00:17:44.191 "compare_and_write": false, 00:17:44.191 "abort": false, 00:17:44.191 "seek_hole": true, 00:17:44.191 "seek_data": true, 00:17:44.191 "copy": false, 00:17:44.191 "nvme_iov_md": false 00:17:44.191 }, 00:17:44.191 "driver_specific": { 00:17:44.191 "lvol": { 00:17:44.191 "lvol_store_uuid": "b5d31cd4-58a1-4fcb-8e2b-0032a1565c9a", 00:17:44.191 "base_bdev": "nvme0n1", 00:17:44.191 "thin_provision": true, 00:17:44.191 "num_allocated_clusters": 0, 00:17:44.191 "snapshot": false, 00:17:44.191 "clone": false, 00:17:44.191 "esnap_clone": false 00:17:44.191 } 00:17:44.191 } 00:17:44.191 } 00:17:44.191 ]' 00:17:44.191 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
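Everything from ftl/common.sh@54 through the jq calls above is the create_base_bdev path: attach the QEMU controller at 0000:00:11.0 as nvme0, read the namespace geometry with bdev_get_bdevs, derive its size in MiB from block_size and num_blocks (4096 * 1310720 blocks = 5120 MiB here), clear any lvol stores left over from a previous run, and carve a 103424 MiB thin-provisioned lvol out of a fresh lvstore. A condensed sketch of the same RPC sequence, using only commands and values that appear in the trace (the lvstore UUID is whatever the run returns):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Base controller; its namespace shows up as nvme0n1.
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

  # Size in MiB = block_size * num_blocks / 1024 / 1024.
  info=$($rpc bdev_get_bdevs -b nvme0n1)
  bs=$(jq '.[] .block_size' <<< "$info")
  nb=$(jq '.[] .num_blocks' <<< "$info")
  echo $(( bs * nb / 1024 / 1024 ))            # 5120 for this 4096 x 1310720 namespace

  # Drop stale lvol stores, then create a new one plus a thin lvol for FTL.
  for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
  done
  lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"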
00:17:44.191 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:44.191 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:44.191 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:44.191 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:44.191 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:44.191 04:08:37 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:44.191 04:08:37 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:44.450 04:08:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:44.450 04:08:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size a2280ca0-ab2b-4d35-a833-646edf70bc44 00:17:44.450 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=a2280ca0-ab2b-4d35-a833-646edf70bc44 00:17:44.450 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:44.450 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:44.450 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:44.450 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a2280ca0-ab2b-4d35-a833-646edf70bc44 00:17:44.450 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:44.450 { 00:17:44.450 "name": "a2280ca0-ab2b-4d35-a833-646edf70bc44", 00:17:44.450 "aliases": [ 00:17:44.450 "lvs/nvme0n1p0" 00:17:44.450 ], 00:17:44.450 "product_name": "Logical Volume", 00:17:44.450 "block_size": 4096, 00:17:44.450 "num_blocks": 26476544, 00:17:44.450 "uuid": "a2280ca0-ab2b-4d35-a833-646edf70bc44", 00:17:44.450 "assigned_rate_limits": { 00:17:44.450 "rw_ios_per_sec": 0, 00:17:44.450 "rw_mbytes_per_sec": 0, 00:17:44.450 "r_mbytes_per_sec": 0, 00:17:44.450 "w_mbytes_per_sec": 0 00:17:44.450 }, 00:17:44.450 "claimed": false, 00:17:44.450 "zoned": false, 00:17:44.450 "supported_io_types": { 00:17:44.450 "read": true, 00:17:44.450 "write": true, 00:17:44.450 "unmap": true, 00:17:44.450 "flush": false, 00:17:44.450 "reset": true, 00:17:44.450 "nvme_admin": false, 00:17:44.450 "nvme_io": false, 00:17:44.450 "nvme_io_md": false, 00:17:44.450 "write_zeroes": true, 00:17:44.450 "zcopy": false, 00:17:44.450 "get_zone_info": false, 00:17:44.450 "zone_management": false, 00:17:44.450 "zone_append": false, 00:17:44.450 "compare": false, 00:17:44.450 "compare_and_write": false, 00:17:44.450 "abort": false, 00:17:44.450 "seek_hole": true, 00:17:44.450 "seek_data": true, 00:17:44.450 "copy": false, 00:17:44.450 "nvme_iov_md": false 00:17:44.450 }, 00:17:44.450 "driver_specific": { 00:17:44.450 "lvol": { 00:17:44.450 "lvol_store_uuid": "b5d31cd4-58a1-4fcb-8e2b-0032a1565c9a", 00:17:44.450 "base_bdev": "nvme0n1", 00:17:44.450 "thin_provision": true, 00:17:44.450 "num_allocated_clusters": 0, 00:17:44.450 "snapshot": false, 00:17:44.450 "clone": false, 00:17:44.450 "esnap_clone": false 00:17:44.450 } 00:17:44.450 } 00:17:44.450 } 00:17:44.450 ]' 00:17:44.450 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:44.710 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:44.710 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:44.710 04:08:37 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:17:44.710 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:44.710 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:44.710 04:08:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:44.710 04:08:37 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a2280ca0-ab2b-4d35-a833-646edf70bc44 --l2p_dram_limit 10' 00:17:44.710 04:08:37 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:44.710 04:08:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:44.710 04:08:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:44.710 04:08:37 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:44.710 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:44.710 04:08:37 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a2280ca0-ab2b-4d35-a833-646edf70bc44 --l2p_dram_limit 10 -c nvc0n1p0 00:17:44.710 [2024-10-13 04:08:37.852990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.710 [2024-10-13 04:08:37.853035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.710 [2024-10-13 04:08:37.853049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.710 [2024-10-13 04:08:37.853056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.710 [2024-10-13 04:08:37.853101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.710 [2024-10-13 04:08:37.853110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.710 [2024-10-13 04:08:37.853118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:44.710 [2024-10-13 04:08:37.853136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.710 [2024-10-13 04:08:37.853156] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.710 [2024-10-13 04:08:37.853791] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.710 [2024-10-13 04:08:37.853808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.710 [2024-10-13 04:08:37.853814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.710 [2024-10-13 04:08:37.853822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:17:44.710 [2024-10-13 04:08:37.853828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.710 [2024-10-13 04:08:37.853855] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 62a628b0-e563-49b5-b153-de474dcf74ca 00:17:44.710 [2024-10-13 04:08:37.854829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.710 [2024-10-13 04:08:37.854859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:44.710 [2024-10-13 04:08:37.854868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:44.710 [2024-10-13 04:08:37.854877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.710 [2024-10-13 04:08:37.859691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.710 [2024-10-13 
04:08:37.859719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.710 [2024-10-13 04:08:37.859726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.759 ms 00:17:44.710 [2024-10-13 04:08:37.859734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.710 [2024-10-13 04:08:37.859800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.710 [2024-10-13 04:08:37.859809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.710 [2024-10-13 04:08:37.859815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:44.710 [2024-10-13 04:08:37.859824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.710 [2024-10-13 04:08:37.859864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.710 [2024-10-13 04:08:37.859873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.710 [2024-10-13 04:08:37.859879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:44.710 [2024-10-13 04:08:37.859886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.710 [2024-10-13 04:08:37.859904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.710 [2024-10-13 04:08:37.862817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.710 [2024-10-13 04:08:37.862840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.710 [2024-10-13 04:08:37.862850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.917 ms 00:17:44.710 [2024-10-13 04:08:37.862858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.710 [2024-10-13 04:08:37.862885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.710 [2024-10-13 04:08:37.862891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.710 [2024-10-13 04:08:37.862898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:44.710 [2024-10-13 04:08:37.862904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.710 [2024-10-13 04:08:37.862918] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:44.710 [2024-10-13 04:08:37.863024] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:44.710 [2024-10-13 04:08:37.863035] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.710 [2024-10-13 04:08:37.863044] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:44.710 [2024-10-13 04:08:37.863053] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.710 [2024-10-13 04:08:37.863060] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.710 [2024-10-13 04:08:37.863067] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:44.710 [2024-10-13 04:08:37.863072] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.710 [2024-10-13 04:08:37.863079] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:44.710 [2024-10-13 04:08:37.863085] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:44.710 [2024-10-13 04:08:37.863092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.711 [2024-10-13 04:08:37.863099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.711 [2024-10-13 04:08:37.863107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:17:44.711 [2024-10-13 04:08:37.863117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.711 [2024-10-13 04:08:37.863181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.711 [2024-10-13 04:08:37.863188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.711 [2024-10-13 04:08:37.863195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:44.711 [2024-10-13 04:08:37.863201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.711 [2024-10-13 04:08:37.863274] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.711 [2024-10-13 04:08:37.863281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.711 [2024-10-13 04:08:37.863290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.711 [2024-10-13 04:08:37.863296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.711 [2024-10-13 04:08:37.863308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:44.711 [2024-10-13 04:08:37.863320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.711 [2024-10-13 04:08:37.863326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.711 [2024-10-13 04:08:37.863338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.711 [2024-10-13 04:08:37.863344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:44.711 [2024-10-13 04:08:37.863350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.711 [2024-10-13 04:08:37.863355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.711 [2024-10-13 04:08:37.863362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:44.711 [2024-10-13 04:08:37.863368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.711 [2024-10-13 04:08:37.863381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:44.711 [2024-10-13 04:08:37.863388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.711 [2024-10-13 04:08:37.863400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.711 [2024-10-13 04:08:37.863411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.711 
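At this point the trace has attached the second controller (0000:00:10.0) as nvc0, re-measured the lvol to size the write-buffer cache, split nvc0n1 into a single 5171 MiB partition, and created the FTL bdev on top of both with a 10 MiB L2P DRAM limit; the "[: : integer expression expected" message from restore.sh line 54 is just the script numerically testing a flag that was never set on this invocation, and the run continues past it. A sketch of the equivalent RPC calls, reusing the lvol UUID from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # NV cache controller; nvc0n1p0 becomes the FTL write-buffer cache.
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create nvc0n1 -s 5171 1

  # FTL bdev over the thin lvol; -t 240 raises the RPC timeout because
  # startup scrubs the NV cache data region first (about 2 s in this log).
  $rpc -t 240 bdev_ftl_create -b ftl0 \
      -d a2280ca0-ab2b-4d35-a833-646edf70bc44 \
      --l2p_dram_limit 10 -c nvc0n1p0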
[2024-10-13 04:08:37.863416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.711 [2024-10-13 04:08:37.863427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.711 [2024-10-13 04:08:37.863433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.711 [2024-10-13 04:08:37.863444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.711 [2024-10-13 04:08:37.863449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.711 [2024-10-13 04:08:37.863461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.711 [2024-10-13 04:08:37.863468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.711 [2024-10-13 04:08:37.863479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.711 [2024-10-13 04:08:37.863484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:44.711 [2024-10-13 04:08:37.863490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.711 [2024-10-13 04:08:37.863495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:44.711 [2024-10-13 04:08:37.863501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:44.711 [2024-10-13 04:08:37.863506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:44.711 [2024-10-13 04:08:37.863517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:44.711 [2024-10-13 04:08:37.863523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863528] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.711 [2024-10-13 04:08:37.863535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.711 [2024-10-13 04:08:37.863541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.711 [2024-10-13 04:08:37.863548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.711 [2024-10-13 04:08:37.863555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.711 [2024-10-13 04:08:37.863563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.711 [2024-10-13 04:08:37.863568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.711 [2024-10-13 04:08:37.863575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.711 [2024-10-13 04:08:37.863580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.711 [2024-10-13 04:08:37.863586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.711 [2024-10-13 04:08:37.863594] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.711 [2024-10-13 
04:08:37.863603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.711 [2024-10-13 04:08:37.863609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:44.711 [2024-10-13 04:08:37.863630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:44.711 [2024-10-13 04:08:37.863636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:44.711 [2024-10-13 04:08:37.863643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:44.711 [2024-10-13 04:08:37.863649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:44.711 [2024-10-13 04:08:37.863656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:44.711 [2024-10-13 04:08:37.863661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:44.711 [2024-10-13 04:08:37.863668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:44.711 [2024-10-13 04:08:37.863674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:44.711 [2024-10-13 04:08:37.863682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:44.711 [2024-10-13 04:08:37.863687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:44.711 [2024-10-13 04:08:37.863693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:44.711 [2024-10-13 04:08:37.863699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:44.711 [2024-10-13 04:08:37.863705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:44.711 [2024-10-13 04:08:37.863711] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.711 [2024-10-13 04:08:37.863719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.711 [2024-10-13 04:08:37.863726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.711 [2024-10-13 04:08:37.863734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.711 [2024-10-13 04:08:37.863740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.711 [2024-10-13 04:08:37.863746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.711 [2024-10-13 04:08:37.863752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.711 [2024-10-13 04:08:37.863759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.711 [2024-10-13 04:08:37.863765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:17:44.711 [2024-10-13 04:08:37.863772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.711 [2024-10-13 04:08:37.863813] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:44.711 [2024-10-13 04:08:37.863824] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:47.255 [2024-10-13 04:08:39.921054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.255 [2024-10-13 04:08:39.921119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:47.255 [2024-10-13 04:08:39.921134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2057.228 ms 00:17:47.255 [2024-10-13 04:08:39.921145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.255 [2024-10-13 04:08:39.946675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.255 [2024-10-13 04:08:39.946720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:47.255 [2024-10-13 04:08:39.946732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.325 ms 00:17:47.255 [2024-10-13 04:08:39.946741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.255 [2024-10-13 04:08:39.946867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.255 [2024-10-13 04:08:39.946879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:47.255 [2024-10-13 04:08:39.946888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:47.255 [2024-10-13 04:08:39.946900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:39.977178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:39.977217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:47.256 [2024-10-13 04:08:39.977227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.246 ms 00:17:47.256 [2024-10-13 04:08:39.977237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:39.977265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:39.977275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:47.256 [2024-10-13 04:08:39.977283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:47.256 [2024-10-13 04:08:39.977294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:39.977665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:39.977683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:47.256 [2024-10-13 04:08:39.977693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:17:47.256 [2024-10-13 04:08:39.977702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 
[2024-10-13 04:08:39.977806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:39.977816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:47.256 [2024-10-13 04:08:39.977824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:47.256 [2024-10-13 04:08:39.977835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:39.991705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:39.991872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:47.256 [2024-10-13 04:08:39.991889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.850 ms 00:17:47.256 [2024-10-13 04:08:39.991900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.003130] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:47.256 [2024-10-13 04:08:40.005983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:40.006012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:47.256 [2024-10-13 04:08:40.006026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.008 ms 00:17:47.256 [2024-10-13 04:08:40.006034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.073423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:40.073485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:47.256 [2024-10-13 04:08:40.073501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.357 ms 00:17:47.256 [2024-10-13 04:08:40.073510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.073706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:40.073718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:47.256 [2024-10-13 04:08:40.073731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:17:47.256 [2024-10-13 04:08:40.073742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.096487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:40.096672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:47.256 [2024-10-13 04:08:40.096694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.695 ms 00:17:47.256 [2024-10-13 04:08:40.096703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.118685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:40.118719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:47.256 [2024-10-13 04:08:40.118732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.946 ms 00:17:47.256 [2024-10-13 04:08:40.118739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.119298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:40.119312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:47.256 
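Every FTL management step in this startup sequence is logged by mngt/ftl_mngt.c as an Action / name / duration / status quadruple, which makes it easy to see where the time goes (the roughly 2 s "Scrub NV cache" step above dominates the total startup duration reported a little further down). A purely illustrative awk one-liner for pulling per-step durations out of a log like this one; the field positions are assumptions based on the lines shown here, and ftl_startup.log is a placeholder file name:

  awk '/trace_step/ && /name:/     { step = $0; sub(/.*name: /, "", step) }
       /trace_step/ && /duration:/ { printf "%10s ms  %s\n", $(NF-1), step }' ftl_startup.log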
[2024-10-13 04:08:40.119322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:17:47.256 [2024-10-13 04:08:40.119330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.186136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:40.186184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:47.256 [2024-10-13 04:08:40.186203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.771 ms 00:17:47.256 [2024-10-13 04:08:40.186212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.209758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:40.209794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:47.256 [2024-10-13 04:08:40.209809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.472 ms 00:17:47.256 [2024-10-13 04:08:40.209818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.232253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:40.232391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:47.256 [2024-10-13 04:08:40.232411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.398 ms 00:17:47.256 [2024-10-13 04:08:40.232418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.255292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:40.255407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:47.256 [2024-10-13 04:08:40.255426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.839 ms 00:17:47.256 [2024-10-13 04:08:40.255433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.255469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:40.255478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:47.256 [2024-10-13 04:08:40.255490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:47.256 [2024-10-13 04:08:40.255498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.255575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.256 [2024-10-13 04:08:40.255584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:47.256 [2024-10-13 04:08:40.255595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:47.256 [2024-10-13 04:08:40.255602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.256 [2024-10-13 04:08:40.256465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2403.023 ms, result 0 00:17:47.256 { 00:17:47.256 "name": "ftl0", 00:17:47.256 "uuid": "62a628b0-e563-49b5-b153-de474dcf74ca" 00:17:47.256 } 00:17:47.256 04:08:40 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:47.256 04:08:40 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:47.568 04:08:40 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:47.568 04:08:40 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:47.568 [2024-10-13 04:08:40.668133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.568 [2024-10-13 04:08:40.668179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:47.568 [2024-10-13 04:08:40.668191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:47.568 [2024-10-13 04:08:40.668208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.568 [2024-10-13 04:08:40.668232] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:47.568 [2024-10-13 04:08:40.670836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.568 [2024-10-13 04:08:40.670864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:47.568 [2024-10-13 04:08:40.670877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:17:47.568 [2024-10-13 04:08:40.670885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.568 [2024-10-13 04:08:40.671142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.568 [2024-10-13 04:08:40.671151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:47.568 [2024-10-13 04:08:40.671162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:17:47.568 [2024-10-13 04:08:40.671169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.568 [2024-10-13 04:08:40.674420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.568 [2024-10-13 04:08:40.674529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:47.568 [2024-10-13 04:08:40.674548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.232 ms 00:17:47.568 [2024-10-13 04:08:40.674555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.568 [2024-10-13 04:08:40.680764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.568 [2024-10-13 04:08:40.680859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:47.568 [2024-10-13 04:08:40.680878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.186 ms 00:17:47.568 [2024-10-13 04:08:40.680886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.828 [2024-10-13 04:08:40.704669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.828 [2024-10-13 04:08:40.704701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:47.828 [2024-10-13 04:08:40.704713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.712 ms 00:17:47.828 [2024-10-13 04:08:40.704721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.828 [2024-10-13 04:08:40.719889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.828 [2024-10-13 04:08:40.720005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:47.828 [2024-10-13 04:08:40.720026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.128 ms 00:17:47.828 [2024-10-13 04:08:40.720034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.828 [2024-10-13 04:08:40.720195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.828 [2024-10-13 04:08:40.720205] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:47.828 [2024-10-13 04:08:40.720215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:17:47.828 [2024-10-13 04:08:40.720223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.828 [2024-10-13 04:08:40.743420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.828 [2024-10-13 04:08:40.743528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:47.828 [2024-10-13 04:08:40.743545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.179 ms 00:17:47.828 [2024-10-13 04:08:40.743552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.828 [2024-10-13 04:08:40.766888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.828 [2024-10-13 04:08:40.766988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:47.828 [2024-10-13 04:08:40.767039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.306 ms 00:17:47.828 [2024-10-13 04:08:40.767061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.828 [2024-10-13 04:08:40.789796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.828 [2024-10-13 04:08:40.789904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:47.828 [2024-10-13 04:08:40.789954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.640 ms 00:17:47.828 [2024-10-13 04:08:40.789975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.828 [2024-10-13 04:08:40.812374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.828 [2024-10-13 04:08:40.812473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:47.828 [2024-10-13 04:08:40.812522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.268 ms 00:17:47.828 [2024-10-13 04:08:40.812544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.828 [2024-10-13 04:08:40.812642] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:47.828 [2024-10-13 04:08:40.812672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.812705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.812734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.812809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.812840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.812870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.812898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.812930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.812987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813020] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 
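After startup the test snapshots the bdev subsystem configuration by wrapping save_subsystem_config -n bdev in a '{"subsystems": [ ... ]}' envelope, so the same stack can be reconstructed later in the restore phase, and then tears the FTL bdev down with bdev_ftl_unload; the unload persists the FTL state and is what produces this long per-band validity dump. A sketch of that save-and-unload step; where the JSON actually gets redirected is not visible in the trace, so the ini.json path below (taken from the spdk_ini_cnfg variable set earlier in common.sh) is an assumption:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json   # assumed destination

  {
    echo '{"subsystems": ['
    $rpc save_subsystem_config -n bdev
    echo ']}'
  } > "$ini_cnfg"

  # Unloading shuts FTL down cleanly and prints the band statistics seen here.
  $rpc bdev_ftl_unload -b ftl0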
[2024-10-13 04:08:40.813914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.813973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.814040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.814073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.814105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:47.828 [2024-10-13 04:08:40.814134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.814613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.814815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.814998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.815257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.815371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.815541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.815799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.815897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.815992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:17:47.829 [2024-10-13 04:08:40.816801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.816975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:47.829 [2024-10-13 04:08:40.817907] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:47.829 [2024-10-13 04:08:40.817936] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 62a628b0-e563-49b5-b153-de474dcf74ca 00:17:47.829 [2024-10-13 04:08:40.817960] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:47.829 [2024-10-13 04:08:40.817989] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:47.829 [2024-10-13 04:08:40.818015] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:47.829 [2024-10-13 04:08:40.818042] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:47.829 [2024-10-13 04:08:40.818062] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:47.829 [2024-10-13 04:08:40.818094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:47.829 [2024-10-13 04:08:40.818116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:47.829 [2024-10-13 04:08:40.818139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:47.829 [2024-10-13 04:08:40.818159] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:17:47.829 [2024-10-13 04:08:40.818188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.829 [2024-10-13 04:08:40.818213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:47.829 [2024-10-13 04:08:40.818244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.543 ms 00:17:47.829 [2024-10-13 04:08:40.818266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.829 [2024-10-13 04:08:40.837277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.829 [2024-10-13 04:08:40.837308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:47.829 [2024-10-13 04:08:40.837320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.884 ms 00:17:47.829 [2024-10-13 04:08:40.837328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.829 [2024-10-13 04:08:40.837726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.829 [2024-10-13 04:08:40.837742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:47.829 [2024-10-13 04:08:40.837753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:17:47.829 [2024-10-13 04:08:40.837760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.829 [2024-10-13 04:08:40.879183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.829 [2024-10-13 04:08:40.879216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:47.829 [2024-10-13 04:08:40.879227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.829 [2024-10-13 04:08:40.879235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.829 [2024-10-13 04:08:40.879286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.829 [2024-10-13 04:08:40.879294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:47.829 [2024-10-13 04:08:40.879303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.829 [2024-10-13 04:08:40.879310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.829 [2024-10-13 04:08:40.879375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.829 [2024-10-13 04:08:40.879384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:47.829 [2024-10-13 04:08:40.879394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.829 [2024-10-13 04:08:40.879401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.829 [2024-10-13 04:08:40.879421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.829 [2024-10-13 04:08:40.879429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:47.829 [2024-10-13 04:08:40.879438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.829 [2024-10-13 04:08:40.879444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.829 [2024-10-13 04:08:40.955050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.829 [2024-10-13 04:08:40.955089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:47.829 [2024-10-13 04:08:40.955101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:47.829 [2024-10-13 04:08:40.955109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.088 [2024-10-13 04:08:41.017142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.088 [2024-10-13 04:08:41.017287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:48.088 [2024-10-13 04:08:41.017306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.088 [2024-10-13 04:08:41.017314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.088 [2024-10-13 04:08:41.017395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.088 [2024-10-13 04:08:41.017407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.088 [2024-10-13 04:08:41.017416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.088 [2024-10-13 04:08:41.017423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.088 [2024-10-13 04:08:41.017470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.088 [2024-10-13 04:08:41.017479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.088 [2024-10-13 04:08:41.017488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.088 [2024-10-13 04:08:41.017496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.088 [2024-10-13 04:08:41.017582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.088 [2024-10-13 04:08:41.017591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.088 [2024-10-13 04:08:41.017602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.088 [2024-10-13 04:08:41.017609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.088 [2024-10-13 04:08:41.017674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.088 [2024-10-13 04:08:41.017683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:48.088 [2024-10-13 04:08:41.017693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.088 [2024-10-13 04:08:41.017700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.088 [2024-10-13 04:08:41.017738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.088 [2024-10-13 04:08:41.017747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.088 [2024-10-13 04:08:41.017757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.088 [2024-10-13 04:08:41.017766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.088 [2024-10-13 04:08:41.017812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.088 [2024-10-13 04:08:41.017821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.088 [2024-10-13 04:08:41.017831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.088 [2024-10-13 04:08:41.017838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.088 [2024-10-13 04:08:41.017963] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 349.798 ms, result 0 00:17:48.088 true 00:17:48.088 04:08:41 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74569 
00:17:48.088 04:08:41 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74569 ']' 00:17:48.088 04:08:41 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74569 00:17:48.088 04:08:41 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:17:48.089 04:08:41 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:48.089 04:08:41 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74569 00:17:48.089 killing process with pid 74569 00:17:48.089 04:08:41 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:48.089 04:08:41 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:48.089 04:08:41 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74569' 00:17:48.089 04:08:41 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 74569 00:17:48.089 04:08:41 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 74569 00:17:54.649 04:08:47 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:58.834 262144+0 records in 00:17:58.834 262144+0 records out 00:17:58.834 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.5498 s, 302 MB/s 00:17:58.834 04:08:51 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:59.769 04:08:52 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:59.769 [2024-10-13 04:08:52.918678] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:17:59.769 [2024-10-13 04:08:52.918878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74777 ] 00:18:00.028 [2024-10-13 04:08:53.064516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.028 [2024-10-13 04:08:53.159714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.286 [2024-10-13 04:08:53.410812] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:00.286 [2024-10-13 04:08:53.411007] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:00.547 [2024-10-13 04:08:53.568058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.547 [2024-10-13 04:08:53.568108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:00.547 [2024-10-13 04:08:53.568121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:00.547 [2024-10-13 04:08:53.568132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.547 [2024-10-13 04:08:53.568174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.547 [2024-10-13 04:08:53.568184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.547 [2024-10-13 04:08:53.568192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:00.547 [2024-10-13 04:08:53.568201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.547 [2024-10-13 04:08:53.568219] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:18:00.547 [2024-10-13 04:08:53.568917] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:00.547 [2024-10-13 04:08:53.568938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.547 [2024-10-13 04:08:53.568948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.547 [2024-10-13 04:08:53.568957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:18:00.547 [2024-10-13 04:08:53.568963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.547 [2024-10-13 04:08:53.570082] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:00.547 [2024-10-13 04:08:53.582766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.547 [2024-10-13 04:08:53.582808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:00.547 [2024-10-13 04:08:53.582820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.686 ms 00:18:00.547 [2024-10-13 04:08:53.582828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.547 [2024-10-13 04:08:53.582878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.547 [2024-10-13 04:08:53.582887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:00.547 [2024-10-13 04:08:53.582897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:00.547 [2024-10-13 04:08:53.582904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.547 [2024-10-13 04:08:53.587976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.547 [2024-10-13 04:08:53.588005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.547 [2024-10-13 04:08:53.588014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.023 ms 00:18:00.547 [2024-10-13 04:08:53.588021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.547 [2024-10-13 04:08:53.588097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.548 [2024-10-13 04:08:53.588105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.548 [2024-10-13 04:08:53.588113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:00.548 [2024-10-13 04:08:53.588120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.548 [2024-10-13 04:08:53.588165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.548 [2024-10-13 04:08:53.588174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:00.548 [2024-10-13 04:08:53.588182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:00.548 [2024-10-13 04:08:53.588189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.548 [2024-10-13 04:08:53.588208] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:00.548 [2024-10-13 04:08:53.591599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.548 [2024-10-13 04:08:53.591638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.548 [2024-10-13 04:08:53.591648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.395 ms 00:18:00.548 [2024-10-13 04:08:53.591655] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.548 [2024-10-13 04:08:53.591684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.548 [2024-10-13 04:08:53.591692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:00.548 [2024-10-13 04:08:53.591700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:00.548 [2024-10-13 04:08:53.591707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.548 [2024-10-13 04:08:53.591725] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:00.548 [2024-10-13 04:08:53.591742] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:00.548 [2024-10-13 04:08:53.591774] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:00.548 [2024-10-13 04:08:53.591791] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:00.548 [2024-10-13 04:08:53.591892] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:00.548 [2024-10-13 04:08:53.591907] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:00.548 [2024-10-13 04:08:53.591917] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:00.548 [2024-10-13 04:08:53.591926] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:00.548 [2024-10-13 04:08:53.591934] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:00.548 [2024-10-13 04:08:53.591942] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:00.548 [2024-10-13 04:08:53.591949] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:00.548 [2024-10-13 04:08:53.591957] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:00.548 [2024-10-13 04:08:53.591964] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:00.548 [2024-10-13 04:08:53.591971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.548 [2024-10-13 04:08:53.591981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:00.548 [2024-10-13 04:08:53.591988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:18:00.548 [2024-10-13 04:08:53.591994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.548 [2024-10-13 04:08:53.592084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.548 [2024-10-13 04:08:53.592092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:00.548 [2024-10-13 04:08:53.592100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:00.548 [2024-10-13 04:08:53.592106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.548 [2024-10-13 04:08:53.592217] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:00.548 [2024-10-13 04:08:53.592359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:00.548 [2024-10-13 04:08:53.592376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:00.548 [2024-10-13 04:08:53.592384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:00.548 [2024-10-13 04:08:53.592398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:00.548 [2024-10-13 04:08:53.592412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:00.548 [2024-10-13 04:08:53.592419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.548 [2024-10-13 04:08:53.592432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:00.548 [2024-10-13 04:08:53.592439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:00.548 [2024-10-13 04:08:53.592445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.548 [2024-10-13 04:08:53.592452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:00.548 [2024-10-13 04:08:53.592459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:00.548 [2024-10-13 04:08:53.592471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:00.548 [2024-10-13 04:08:53.592484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:00.548 [2024-10-13 04:08:53.592491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:00.548 [2024-10-13 04:08:53.592504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.548 [2024-10-13 04:08:53.592517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:00.548 [2024-10-13 04:08:53.592523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.548 [2024-10-13 04:08:53.592535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:00.548 [2024-10-13 04:08:53.592541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.548 [2024-10-13 04:08:53.592554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:00.548 [2024-10-13 04:08:53.592560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.548 [2024-10-13 04:08:53.592572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:00.548 [2024-10-13 04:08:53.592579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.548 [2024-10-13 04:08:53.592591] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:00.548 [2024-10-13 04:08:53.592597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:00.548 [2024-10-13 04:08:53.592604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.548 [2024-10-13 04:08:53.592610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:00.548 [2024-10-13 04:08:53.592633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:00.548 [2024-10-13 04:08:53.592640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:00.548 [2024-10-13 04:08:53.592653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:00.548 [2024-10-13 04:08:53.592659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592666] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:00.548 [2024-10-13 04:08:53.592673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:00.548 [2024-10-13 04:08:53.592680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.548 [2024-10-13 04:08:53.592687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.548 [2024-10-13 04:08:53.592694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:00.548 [2024-10-13 04:08:53.592702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:00.548 [2024-10-13 04:08:53.592709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:00.548 [2024-10-13 04:08:53.592716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:00.548 [2024-10-13 04:08:53.592722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:00.548 [2024-10-13 04:08:53.592728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:00.548 [2024-10-13 04:08:53.592737] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:00.548 [2024-10-13 04:08:53.592746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.548 [2024-10-13 04:08:53.592754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:00.548 [2024-10-13 04:08:53.592761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:00.548 [2024-10-13 04:08:53.592768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:00.548 [2024-10-13 04:08:53.592775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:00.548 [2024-10-13 04:08:53.592782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:00.548 [2024-10-13 04:08:53.592789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:00.548 [2024-10-13 04:08:53.592796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:00.548 [2024-10-13 04:08:53.592803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:00.548 [2024-10-13 04:08:53.592810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:00.548 [2024-10-13 04:08:53.592818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:00.548 [2024-10-13 04:08:53.592824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:00.548 [2024-10-13 04:08:53.592831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:00.548 [2024-10-13 04:08:53.592838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:00.548 [2024-10-13 04:08:53.592846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:00.549 [2024-10-13 04:08:53.592853] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:00.549 [2024-10-13 04:08:53.592861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.549 [2024-10-13 04:08:53.592871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:00.549 [2024-10-13 04:08:53.592878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:00.549 [2024-10-13 04:08:53.592885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:00.549 [2024-10-13 04:08:53.592892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:00.549 [2024-10-13 04:08:53.592900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.549 [2024-10-13 04:08:53.592907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:00.549 [2024-10-13 04:08:53.592915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:18:00.549 [2024-10-13 04:08:53.592921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.549 [2024-10-13 04:08:53.618953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.549 [2024-10-13 04:08:53.619073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:00.549 [2024-10-13 04:08:53.619131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.986 ms 00:18:00.549 [2024-10-13 04:08:53.619153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.549 [2024-10-13 04:08:53.619247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.549 [2024-10-13 04:08:53.619272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:00.549 [2024-10-13 04:08:53.619292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.061 ms 00:18:00.549 [2024-10-13 04:08:53.619310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.549 [2024-10-13 04:08:53.658133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.549 [2024-10-13 04:08:53.658273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:00.549 [2024-10-13 04:08:53.658336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.764 ms 00:18:00.549 [2024-10-13 04:08:53.658362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.549 [2024-10-13 04:08:53.658411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.549 [2024-10-13 04:08:53.658435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:00.549 [2024-10-13 04:08:53.658455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:00.549 [2024-10-13 04:08:53.658473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.549 [2024-10-13 04:08:53.659217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.549 [2024-10-13 04:08:53.659342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:00.549 [2024-10-13 04:08:53.659408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:18:00.549 [2024-10-13 04:08:53.659431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.549 [2024-10-13 04:08:53.659576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.549 [2024-10-13 04:08:53.659607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:00.549 [2024-10-13 04:08:53.659669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:18:00.549 [2024-10-13 04:08:53.659691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.549 [2024-10-13 04:08:53.672719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.549 [2024-10-13 04:08:53.672821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:00.549 [2024-10-13 04:08:53.672902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.997 ms 00:18:00.549 [2024-10-13 04:08:53.672924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.549 [2024-10-13 04:08:53.685183] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:00.549 [2024-10-13 04:08:53.685318] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:00.549 [2024-10-13 04:08:53.685379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.549 [2024-10-13 04:08:53.685400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:00.549 [2024-10-13 04:08:53.685419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.335 ms 00:18:00.549 [2024-10-13 04:08:53.685782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.807 [2024-10-13 04:08:53.709840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.808 [2024-10-13 04:08:53.709959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:00.808 [2024-10-13 04:08:53.710011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.919 ms 00:18:00.808 [2024-10-13 04:08:53.710044] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.808 [2024-10-13 04:08:53.721481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.808 [2024-10-13 04:08:53.721594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:00.808 [2024-10-13 04:08:53.721659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.394 ms 00:18:00.808 [2024-10-13 04:08:53.721681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.808 [2024-10-13 04:08:53.732843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.808 [2024-10-13 04:08:53.732947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:00.808 [2024-10-13 04:08:53.732995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.121 ms 00:18:00.808 [2024-10-13 04:08:53.733015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.808 [2024-10-13 04:08:53.733632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.808 [2024-10-13 04:08:53.733720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:00.808 [2024-10-13 04:08:53.733770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:18:00.808 [2024-10-13 04:08:53.733791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.808 [2024-10-13 04:08:53.788270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.808 [2024-10-13 04:08:53.788427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:00.808 [2024-10-13 04:08:53.788484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.448 ms 00:18:00.808 [2024-10-13 04:08:53.788507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.808 [2024-10-13 04:08:53.798719] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:00.808 [2024-10-13 04:08:53.801160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.808 [2024-10-13 04:08:53.801257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:00.808 [2024-10-13 04:08:53.801312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.503 ms 00:18:00.808 [2024-10-13 04:08:53.801336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.808 [2024-10-13 04:08:53.801440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.808 [2024-10-13 04:08:53.801472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:00.808 [2024-10-13 04:08:53.801497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:00.808 [2024-10-13 04:08:53.801550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.808 [2024-10-13 04:08:53.801656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.808 [2024-10-13 04:08:53.801791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:00.808 [2024-10-13 04:08:53.801827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:00.808 [2024-10-13 04:08:53.801846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.808 [2024-10-13 04:08:53.801887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.808 [2024-10-13 04:08:53.802177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:00.808 [2024-10-13 04:08:53.802201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:00.808 [2024-10-13 04:08:53.802210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.808 [2024-10-13 04:08:53.802260] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:00.808 [2024-10-13 04:08:53.802273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.808 [2024-10-13 04:08:53.802282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:00.808 [2024-10-13 04:08:53.802293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:00.808 [2024-10-13 04:08:53.802301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.808 [2024-10-13 04:08:53.825713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.808 [2024-10-13 04:08:53.825832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:00.808 [2024-10-13 04:08:53.825887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.393 ms 00:18:00.808 [2024-10-13 04:08:53.825910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.808 [2024-10-13 04:08:53.826043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.808 [2024-10-13 04:08:53.826073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:00.808 [2024-10-13 04:08:53.826117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:00.808 [2024-10-13 04:08:53.826140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.808 [2024-10-13 04:08:53.827079] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.617 ms, result 0 00:18:01.743  [2024-10-13T04:08:55.838Z] Copying: 45/1024 [MB] (45 MBps) [2024-10-13T04:08:57.214Z] Copying: 91/1024 [MB] (45 MBps) [2024-10-13T04:08:58.148Z] Copying: 135/1024 [MB] (44 MBps) [2024-10-13T04:08:59.082Z] Copying: 185/1024 [MB] (49 MBps) [2024-10-13T04:09:00.046Z] Copying: 231/1024 [MB] (46 MBps) [2024-10-13T04:09:00.982Z] Copying: 277/1024 [MB] (46 MBps) [2024-10-13T04:09:01.915Z] Copying: 323/1024 [MB] (45 MBps) [2024-10-13T04:09:02.851Z] Copying: 373/1024 [MB] (50 MBps) [2024-10-13T04:09:04.227Z] Copying: 426/1024 [MB] (53 MBps) [2024-10-13T04:09:05.160Z] Copying: 480/1024 [MB] (53 MBps) [2024-10-13T04:09:06.096Z] Copying: 534/1024 [MB] (53 MBps) [2024-10-13T04:09:07.030Z] Copying: 565/1024 [MB] (30 MBps) [2024-10-13T04:09:07.964Z] Copying: 595/1024 [MB] (29 MBps) [2024-10-13T04:09:08.896Z] Copying: 633/1024 [MB] (38 MBps) [2024-10-13T04:09:10.269Z] Copying: 679/1024 [MB] (46 MBps) [2024-10-13T04:09:11.202Z] Copying: 726/1024 [MB] (46 MBps) [2024-10-13T04:09:12.135Z] Copying: 772/1024 [MB] (46 MBps) [2024-10-13T04:09:13.068Z] Copying: 818/1024 [MB] (46 MBps) [2024-10-13T04:09:14.001Z] Copying: 864/1024 [MB] (45 MBps) [2024-10-13T04:09:14.933Z] Copying: 911/1024 [MB] (46 MBps) [2024-10-13T04:09:15.865Z] Copying: 958/1024 [MB] (46 MBps) [2024-10-13T04:09:16.430Z] Copying: 1004/1024 [MB] (45 MBps) [2024-10-13T04:09:16.430Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-10-13 04:09:16.272124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.270 [2024-10-13 04:09:16.272168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
00:18:23.270 [2024-10-13 04:09:16.272182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:23.270 [2024-10-13 04:09:16.272190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.270 [2024-10-13 04:09:16.272209] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:23.270 [2024-10-13 04:09:16.274790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.270 [2024-10-13 04:09:16.274821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:23.271 [2024-10-13 04:09:16.274832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:18:23.271 [2024-10-13 04:09:16.274840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.271 [2024-10-13 04:09:16.276141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.271 [2024-10-13 04:09:16.276177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:23.271 [2024-10-13 04:09:16.276186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:18:23.271 [2024-10-13 04:09:16.276193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.271 [2024-10-13 04:09:16.288559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.271 [2024-10-13 04:09:16.288591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:23.271 [2024-10-13 04:09:16.288600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.351 ms 00:18:23.271 [2024-10-13 04:09:16.288608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.271 [2024-10-13 04:09:16.294710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.271 [2024-10-13 04:09:16.294736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:23.271 [2024-10-13 04:09:16.294751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.063 ms 00:18:23.271 [2024-10-13 04:09:16.294759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.271 [2024-10-13 04:09:16.318201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.271 [2024-10-13 04:09:16.318339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:23.271 [2024-10-13 04:09:16.318356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.388 ms 00:18:23.271 [2024-10-13 04:09:16.318363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.271 [2024-10-13 04:09:16.332122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.271 [2024-10-13 04:09:16.332154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:23.271 [2024-10-13 04:09:16.332166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.730 ms 00:18:23.271 [2024-10-13 04:09:16.332173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.271 [2024-10-13 04:09:16.332291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.271 [2024-10-13 04:09:16.332301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:23.271 [2024-10-13 04:09:16.332309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:23.271 [2024-10-13 04:09:16.332321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.271 [2024-10-13 
04:09:16.354848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.271 [2024-10-13 04:09:16.354888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:23.271 [2024-10-13 04:09:16.354898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.513 ms 00:18:23.271 [2024-10-13 04:09:16.354905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.271 [2024-10-13 04:09:16.377548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.271 [2024-10-13 04:09:16.377577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:23.271 [2024-10-13 04:09:16.377594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.614 ms 00:18:23.271 [2024-10-13 04:09:16.377601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.271 [2024-10-13 04:09:16.399731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.271 [2024-10-13 04:09:16.399852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:23.271 [2024-10-13 04:09:16.399866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.084 ms 00:18:23.271 [2024-10-13 04:09:16.399873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.271 [2024-10-13 04:09:16.421885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.271 [2024-10-13 04:09:16.421912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:23.271 [2024-10-13 04:09:16.421921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.952 ms 00:18:23.271 [2024-10-13 04:09:16.421928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.271 [2024-10-13 04:09:16.421957] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:23.271 [2024-10-13 04:09:16.421969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.421979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.421986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.421994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422052] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422235] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:23.271 [2024-10-13 04:09:16.422402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 
04:09:16.422418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:18:23.272 [2024-10-13 04:09:16.422597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:23.272 [2024-10-13 04:09:16.422728] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:23.272 [2024-10-13 04:09:16.422736] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 62a628b0-e563-49b5-b153-de474dcf74ca 00:18:23.272 [2024-10-13 04:09:16.422747] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:23.272 [2024-10-13 04:09:16.422754] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:23.272 [2024-10-13 04:09:16.422763] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:23.272 [2024-10-13 04:09:16.422770] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:23.272 [2024-10-13 04:09:16.422777] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:23.272 [2024-10-13 04:09:16.422792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:23.272 [2024-10-13 04:09:16.422799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:23.272 [2024-10-13 04:09:16.422811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:23.272 [2024-10-13 04:09:16.422818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:23.272 [2024-10-13 04:09:16.422825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.272 [2024-10-13 04:09:16.422832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:23.272 [2024-10-13 04:09:16.422840] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:18:23.272 [2024-10-13 04:09:16.422847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.529 [2024-10-13 04:09:16.435136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.529 [2024-10-13 04:09:16.435163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:23.529 [2024-10-13 04:09:16.435173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.274 ms 00:18:23.529 [2024-10-13 04:09:16.435181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.529 [2024-10-13 04:09:16.435509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.529 [2024-10-13 04:09:16.435518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:23.529 [2024-10-13 04:09:16.435525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:18:23.529 [2024-10-13 04:09:16.435532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.529 [2024-10-13 04:09:16.467868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.529 [2024-10-13 04:09:16.467897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:23.529 [2024-10-13 04:09:16.467906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.529 [2024-10-13 04:09:16.467914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.529 [2024-10-13 04:09:16.467964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.529 [2024-10-13 04:09:16.467972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:23.529 [2024-10-13 04:09:16.467980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.529 [2024-10-13 04:09:16.467986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.529 [2024-10-13 04:09:16.468036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.529 [2024-10-13 04:09:16.468050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:23.529 [2024-10-13 04:09:16.468058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.529 [2024-10-13 04:09:16.468065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.529 [2024-10-13 04:09:16.468079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.529 [2024-10-13 04:09:16.468086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:23.529 [2024-10-13 04:09:16.468102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.529 [2024-10-13 04:09:16.468110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.529 [2024-10-13 04:09:16.545072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.530 [2024-10-13 04:09:16.545216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:23.530 [2024-10-13 04:09:16.545233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.530 [2024-10-13 04:09:16.545241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.530 [2024-10-13 04:09:16.607905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.530 [2024-10-13 04:09:16.608047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:18:23.530 [2024-10-13 04:09:16.608062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.530 [2024-10-13 04:09:16.608070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.530 [2024-10-13 04:09:16.608143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.530 [2024-10-13 04:09:16.608152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:23.530 [2024-10-13 04:09:16.608165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.530 [2024-10-13 04:09:16.608173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.530 [2024-10-13 04:09:16.608205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.530 [2024-10-13 04:09:16.608214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:23.530 [2024-10-13 04:09:16.608222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.530 [2024-10-13 04:09:16.608230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.530 [2024-10-13 04:09:16.608313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.530 [2024-10-13 04:09:16.608322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:23.530 [2024-10-13 04:09:16.608333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.530 [2024-10-13 04:09:16.608341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.530 [2024-10-13 04:09:16.608370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.530 [2024-10-13 04:09:16.608379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:23.530 [2024-10-13 04:09:16.608387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.530 [2024-10-13 04:09:16.608394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.530 [2024-10-13 04:09:16.608426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.530 [2024-10-13 04:09:16.608434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:23.530 [2024-10-13 04:09:16.608442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.530 [2024-10-13 04:09:16.608451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.530 [2024-10-13 04:09:16.608490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:23.530 [2024-10-13 04:09:16.608499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:23.530 [2024-10-13 04:09:16.608507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:23.530 [2024-10-13 04:09:16.608514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.530 [2024-10-13 04:09:16.608640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.469 ms, result 0 00:18:25.440 00:18:25.440 00:18:25.440 04:09:18 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:25.440 [2024-10-13 04:09:18.437716] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:18:25.440 [2024-10-13 04:09:18.438017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75040 ] 00:18:25.440 [2024-10-13 04:09:18.588236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.730 [2024-10-13 04:09:18.683390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.993 [2024-10-13 04:09:18.933190] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:25.993 [2024-10-13 04:09:18.933249] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:25.993 [2024-10-13 04:09:19.086166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.993 [2024-10-13 04:09:19.086346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:25.993 [2024-10-13 04:09:19.086365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:25.993 [2024-10-13 04:09:19.086380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.993 [2024-10-13 04:09:19.086435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.993 [2024-10-13 04:09:19.086445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:25.993 [2024-10-13 04:09:19.086454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:25.993 [2024-10-13 04:09:19.086463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.993 [2024-10-13 04:09:19.086482] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:25.993 [2024-10-13 04:09:19.087152] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:25.993 [2024-10-13 04:09:19.087177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.993 [2024-10-13 04:09:19.087188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:25.993 [2024-10-13 04:09:19.087196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:18:25.993 [2024-10-13 04:09:19.087203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.993 [2024-10-13 04:09:19.088218] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:25.993 [2024-10-13 04:09:19.100147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.993 [2024-10-13 04:09:19.100179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:25.993 [2024-10-13 04:09:19.100191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.931 ms 00:18:25.993 [2024-10-13 04:09:19.100199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.993 [2024-10-13 04:09:19.100249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.993 [2024-10-13 04:09:19.100258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:25.993 [2024-10-13 04:09:19.100269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:25.993 [2024-10-13 04:09:19.100277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.993 [2024-10-13 04:09:19.104950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:25.993 [2024-10-13 04:09:19.104978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:25.993 [2024-10-13 04:09:19.104987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.618 ms 00:18:25.993 [2024-10-13 04:09:19.104995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.993 [2024-10-13 04:09:19.105069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.993 [2024-10-13 04:09:19.105079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:25.993 [2024-10-13 04:09:19.105086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:25.993 [2024-10-13 04:09:19.105094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.993 [2024-10-13 04:09:19.105132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.993 [2024-10-13 04:09:19.105141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:25.993 [2024-10-13 04:09:19.105148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:25.993 [2024-10-13 04:09:19.105155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.993 [2024-10-13 04:09:19.105176] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:25.993 [2024-10-13 04:09:19.108396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.993 [2024-10-13 04:09:19.108522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:25.993 [2024-10-13 04:09:19.108537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms 00:18:25.993 [2024-10-13 04:09:19.108544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.993 [2024-10-13 04:09:19.108578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.993 [2024-10-13 04:09:19.108586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:25.993 [2024-10-13 04:09:19.108594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:25.993 [2024-10-13 04:09:19.108601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.993 [2024-10-13 04:09:19.108640] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:25.993 [2024-10-13 04:09:19.108659] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:25.993 [2024-10-13 04:09:19.108693] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:25.993 [2024-10-13 04:09:19.108709] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:25.993 [2024-10-13 04:09:19.108812] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:25.993 [2024-10-13 04:09:19.108822] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:25.993 [2024-10-13 04:09:19.108832] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:25.993 [2024-10-13 04:09:19.108842] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:25.993 [2024-10-13 04:09:19.108851] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:25.993 [2024-10-13 04:09:19.108858] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:25.993 [2024-10-13 04:09:19.108865] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:25.993 [2024-10-13 04:09:19.108872] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:25.993 [2024-10-13 04:09:19.108879] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:25.993 [2024-10-13 04:09:19.108887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.993 [2024-10-13 04:09:19.108896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:25.993 [2024-10-13 04:09:19.108904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:18:25.993 [2024-10-13 04:09:19.108911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.993 [2024-10-13 04:09:19.108992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.993 [2024-10-13 04:09:19.109000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:25.993 [2024-10-13 04:09:19.109007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:25.993 [2024-10-13 04:09:19.109014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.993 [2024-10-13 04:09:19.109125] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:25.993 [2024-10-13 04:09:19.109136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:25.993 [2024-10-13 04:09:19.109146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.993 [2024-10-13 04:09:19.109153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.993 [2024-10-13 04:09:19.109161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:25.993 [2024-10-13 04:09:19.109167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:25.993 [2024-10-13 04:09:19.109174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:25.993 [2024-10-13 04:09:19.109180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:25.993 [2024-10-13 04:09:19.109189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:25.993 [2024-10-13 04:09:19.109195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.993 [2024-10-13 04:09:19.109202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:25.993 [2024-10-13 04:09:19.109209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:25.993 [2024-10-13 04:09:19.109215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.993 [2024-10-13 04:09:19.109221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:25.993 [2024-10-13 04:09:19.109228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:25.993 [2024-10-13 04:09:19.109240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.993 [2024-10-13 04:09:19.109247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:25.993 [2024-10-13 04:09:19.109254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:25.993 [2024-10-13 04:09:19.109260] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.993 [2024-10-13 04:09:19.109267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:25.993 [2024-10-13 04:09:19.109273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:25.993 [2024-10-13 04:09:19.109279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.993 [2024-10-13 04:09:19.109286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:25.993 [2024-10-13 04:09:19.109292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:25.993 [2024-10-13 04:09:19.109298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.993 [2024-10-13 04:09:19.109304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:25.993 [2024-10-13 04:09:19.109310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:25.993 [2024-10-13 04:09:19.109317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.993 [2024-10-13 04:09:19.109323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:25.993 [2024-10-13 04:09:19.109329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:25.993 [2024-10-13 04:09:19.109335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.994 [2024-10-13 04:09:19.109341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:25.994 [2024-10-13 04:09:19.109348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:25.994 [2024-10-13 04:09:19.109354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.994 [2024-10-13 04:09:19.109360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:25.994 [2024-10-13 04:09:19.109366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:25.994 [2024-10-13 04:09:19.109372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.994 [2024-10-13 04:09:19.109378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:25.994 [2024-10-13 04:09:19.109384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:25.994 [2024-10-13 04:09:19.109391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.994 [2024-10-13 04:09:19.109397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:25.994 [2024-10-13 04:09:19.109403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:25.994 [2024-10-13 04:09:19.109409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.994 [2024-10-13 04:09:19.109415] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:25.994 [2024-10-13 04:09:19.109423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:25.994 [2024-10-13 04:09:19.109429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.994 [2024-10-13 04:09:19.109436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.994 [2024-10-13 04:09:19.109444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:25.994 [2024-10-13 04:09:19.109451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:25.994 [2024-10-13 04:09:19.109458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:25.994 
[2024-10-13 04:09:19.109464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:25.994 [2024-10-13 04:09:19.109470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:25.994 [2024-10-13 04:09:19.109477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:25.994 [2024-10-13 04:09:19.109485] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:25.994 [2024-10-13 04:09:19.109493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.994 [2024-10-13 04:09:19.109502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:25.994 [2024-10-13 04:09:19.109509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:25.994 [2024-10-13 04:09:19.109516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:25.994 [2024-10-13 04:09:19.109523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:25.994 [2024-10-13 04:09:19.109530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:25.994 [2024-10-13 04:09:19.109537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:25.994 [2024-10-13 04:09:19.109543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:25.994 [2024-10-13 04:09:19.109550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:25.994 [2024-10-13 04:09:19.109557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:25.994 [2024-10-13 04:09:19.109564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:25.994 [2024-10-13 04:09:19.109571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:25.994 [2024-10-13 04:09:19.109577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:25.994 [2024-10-13 04:09:19.109584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:25.994 [2024-10-13 04:09:19.109591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:25.994 [2024-10-13 04:09:19.109598] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:25.994 [2024-10-13 04:09:19.109605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.994 [2024-10-13 04:09:19.109628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:25.994 [2024-10-13 04:09:19.109635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:25.994 [2024-10-13 04:09:19.109642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:25.994 [2024-10-13 04:09:19.109649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:25.994 [2024-10-13 04:09:19.109656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.994 [2024-10-13 04:09:19.109663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:25.994 [2024-10-13 04:09:19.109671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:18:25.994 [2024-10-13 04:09:19.109678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.994 [2024-10-13 04:09:19.135005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.994 [2024-10-13 04:09:19.135137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:25.994 [2024-10-13 04:09:19.135152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.285 ms 00:18:25.994 [2024-10-13 04:09:19.135160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.994 [2024-10-13 04:09:19.135241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.994 [2024-10-13 04:09:19.135254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:25.994 [2024-10-13 04:09:19.135263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:25.994 [2024-10-13 04:09:19.135270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.171771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.171810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:26.252 [2024-10-13 04:09:19.171823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.452 ms 00:18:26.252 [2024-10-13 04:09:19.171831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.171869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.171878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:26.252 [2024-10-13 04:09:19.171886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:26.252 [2024-10-13 04:09:19.171893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.172240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.172256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:26.252 [2024-10-13 04:09:19.172265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:18:26.252 [2024-10-13 04:09:19.172272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.172391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.172400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:26.252 [2024-10-13 04:09:19.172408] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:26.252 [2024-10-13 04:09:19.172414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.185240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.185269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:26.252 [2024-10-13 04:09:19.185279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.806 ms 00:18:26.252 [2024-10-13 04:09:19.185287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.197347] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:26.252 [2024-10-13 04:09:19.197379] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:26.252 [2024-10-13 04:09:19.197390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.197397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:26.252 [2024-10-13 04:09:19.197405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.000 ms 00:18:26.252 [2024-10-13 04:09:19.197412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.221085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.221119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:26.252 [2024-10-13 04:09:19.221133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.639 ms 00:18:26.252 [2024-10-13 04:09:19.221140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.232490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.232629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:26.252 [2024-10-13 04:09:19.232645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.310 ms 00:18:26.252 [2024-10-13 04:09:19.232652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.243772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.243799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:26.252 [2024-10-13 04:09:19.243808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.093 ms 00:18:26.252 [2024-10-13 04:09:19.243815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.244410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.244428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:26.252 [2024-10-13 04:09:19.244437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:18:26.252 [2024-10-13 04:09:19.244445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.298548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.298600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:26.252 [2024-10-13 04:09:19.298630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.085 ms 00:18:26.252 [2024-10-13 04:09:19.298643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.308841] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:26.252 [2024-10-13 04:09:19.311123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.311151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:26.252 [2024-10-13 04:09:19.311163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.429 ms 00:18:26.252 [2024-10-13 04:09:19.311173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.311263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.311273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:26.252 [2024-10-13 04:09:19.311282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:26.252 [2024-10-13 04:09:19.311289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.311353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.311363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:26.252 [2024-10-13 04:09:19.311371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:26.252 [2024-10-13 04:09:19.311378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.311396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.311404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:26.252 [2024-10-13 04:09:19.311411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:26.252 [2024-10-13 04:09:19.311418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.252 [2024-10-13 04:09:19.311447] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:26.252 [2024-10-13 04:09:19.311457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.252 [2024-10-13 04:09:19.311466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:26.253 [2024-10-13 04:09:19.311474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:26.253 [2024-10-13 04:09:19.311482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.253 [2024-10-13 04:09:19.334190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.253 [2024-10-13 04:09:19.334331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:26.253 [2024-10-13 04:09:19.334349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.692 ms 00:18:26.253 [2024-10-13 04:09:19.334356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.253 [2024-10-13 04:09:19.334432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.253 [2024-10-13 04:09:19.334442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:26.253 [2024-10-13 04:09:19.334450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:26.253 [2024-10-13 04:09:19.334457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:26.253 [2024-10-13 04:09:19.335386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 248.823 ms, result 0 00:18:27.627  [2024-10-13T04:09:21.720Z] Copying: 46/1024 [MB] (46 MBps) [2024-10-13T04:09:22.654Z] Copying: 94/1024 [MB] (47 MBps) [2024-10-13T04:09:23.589Z] Copying: 144/1024 [MB] (49 MBps) [2024-10-13T04:09:24.523Z] Copying: 192/1024 [MB] (48 MBps) [2024-10-13T04:09:25.897Z] Copying: 238/1024 [MB] (45 MBps) [2024-10-13T04:09:26.831Z] Copying: 288/1024 [MB] (50 MBps) [2024-10-13T04:09:27.769Z] Copying: 335/1024 [MB] (46 MBps) [2024-10-13T04:09:28.704Z] Copying: 383/1024 [MB] (47 MBps) [2024-10-13T04:09:29.637Z] Copying: 431/1024 [MB] (48 MBps) [2024-10-13T04:09:30.622Z] Copying: 478/1024 [MB] (47 MBps) [2024-10-13T04:09:31.556Z] Copying: 527/1024 [MB] (48 MBps) [2024-10-13T04:09:32.930Z] Copying: 573/1024 [MB] (46 MBps) [2024-10-13T04:09:33.864Z] Copying: 622/1024 [MB] (49 MBps) [2024-10-13T04:09:34.798Z] Copying: 673/1024 [MB] (51 MBps) [2024-10-13T04:09:35.734Z] Copying: 720/1024 [MB] (46 MBps) [2024-10-13T04:09:36.667Z] Copying: 769/1024 [MB] (49 MBps) [2024-10-13T04:09:37.602Z] Copying: 817/1024 [MB] (48 MBps) [2024-10-13T04:09:38.534Z] Copying: 866/1024 [MB] (48 MBps) [2024-10-13T04:09:39.908Z] Copying: 912/1024 [MB] (46 MBps) [2024-10-13T04:09:40.841Z] Copying: 960/1024 [MB] (48 MBps) [2024-10-13T04:09:40.841Z] Copying: 1010/1024 [MB] (49 MBps) [2024-10-13T04:09:41.409Z] Copying: 1024/1024 [MB] (average 48 MBps)[2024-10-13 04:09:41.208866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.249 [2024-10-13 04:09:41.208914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:48.249 [2024-10-13 04:09:41.208928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:48.249 [2024-10-13 04:09:41.208936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.249 [2024-10-13 04:09:41.208957] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:48.249 [2024-10-13 04:09:41.211574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.249 [2024-10-13 04:09:41.211602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:48.249 [2024-10-13 04:09:41.211620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:18:48.249 [2024-10-13 04:09:41.211629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.249 [2024-10-13 04:09:41.211848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.249 [2024-10-13 04:09:41.211862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:48.249 [2024-10-13 04:09:41.211870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:18:48.249 [2024-10-13 04:09:41.211878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.249 [2024-10-13 04:09:41.215309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.249 [2024-10-13 04:09:41.215329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:48.249 [2024-10-13 04:09:41.215339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.418 ms 00:18:48.249 [2024-10-13 04:09:41.215348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.249 [2024-10-13 04:09:41.221527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:48.249 [2024-10-13 04:09:41.221558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:48.249 [2024-10-13 04:09:41.221569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.164 ms 00:18:48.249 [2024-10-13 04:09:41.221577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.249 [2024-10-13 04:09:41.247275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.249 [2024-10-13 04:09:41.247314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:48.249 [2024-10-13 04:09:41.247325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.626 ms 00:18:48.249 [2024-10-13 04:09:41.247332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.249 [2024-10-13 04:09:41.261391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.249 [2024-10-13 04:09:41.261427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:48.249 [2024-10-13 04:09:41.261438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.037 ms 00:18:48.249 [2024-10-13 04:09:41.261445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.249 [2024-10-13 04:09:41.261571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.249 [2024-10-13 04:09:41.261582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:48.249 [2024-10-13 04:09:41.261596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:48.249 [2024-10-13 04:09:41.261603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.249 [2024-10-13 04:09:41.285370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.249 [2024-10-13 04:09:41.285531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:48.249 [2024-10-13 04:09:41.285548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.731 ms 00:18:48.249 [2024-10-13 04:09:41.285556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.249 [2024-10-13 04:09:41.310145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.249 [2024-10-13 04:09:41.310186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:48.249 [2024-10-13 04:09:41.310196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.567 ms 00:18:48.249 [2024-10-13 04:09:41.310203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.249 [2024-10-13 04:09:41.332454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.249 [2024-10-13 04:09:41.332484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:48.249 [2024-10-13 04:09:41.332493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.232 ms 00:18:48.249 [2024-10-13 04:09:41.332501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.249 [2024-10-13 04:09:41.354432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.249 [2024-10-13 04:09:41.354460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:48.249 [2024-10-13 04:09:41.354469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.892 ms 00:18:48.249 [2024-10-13 04:09:41.354477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.249 [2024-10-13 
04:09:41.354494] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:48.249 [2024-10-13 04:09:41.354507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 
04:09:41.354702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:48.249 [2024-10-13 04:09:41.354761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:18:48.250 [2024-10-13 04:09:41.354881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.354996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:48.250 [2024-10-13 04:09:41.355255] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:48.250 [2024-10-13 04:09:41.355266] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 62a628b0-e563-49b5-b153-de474dcf74ca 00:18:48.250 [2024-10-13 04:09:41.355274] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:48.250 [2024-10-13 04:09:41.355283] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:48.250 [2024-10-13 04:09:41.355290] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:48.250 [2024-10-13 04:09:41.355297] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:48.250 [2024-10-13 04:09:41.355304] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:48.250 [2024-10-13 04:09:41.355312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:48.250 [2024-10-13 04:09:41.355324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:48.250 [2024-10-13 04:09:41.355331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:48.250 [2024-10-13 04:09:41.355337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:48.250 [2024-10-13 04:09:41.355344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.250 [2024-10-13 04:09:41.355351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:48.250 [2024-10-13 04:09:41.355359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:18:48.250 [2024-10-13 04:09:41.355366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.250 [2024-10-13 04:09:41.367522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.250 [2024-10-13 04:09:41.367550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:48.250 [2024-10-13 04:09:41.367560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.140 ms 00:18:48.250 [2024-10-13 04:09:41.367568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.250 [2024-10-13 04:09:41.367932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.250 [2024-10-13 04:09:41.367944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:48.250 [2024-10-13 04:09:41.367952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:18:48.250 [2024-10-13 04:09:41.367959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.250 [2024-10-13 04:09:41.400192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.250 [2024-10-13 04:09:41.400229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:48.250 [2024-10-13 04:09:41.400240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.250 [2024-10-13 04:09:41.400247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.250 [2024-10-13 04:09:41.400302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.250 [2024-10-13 04:09:41.400310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:48.250 [2024-10-13 04:09:41.400318] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.250 [2024-10-13 04:09:41.400325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.250 [2024-10-13 04:09:41.400380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.250 [2024-10-13 04:09:41.400389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:48.250 [2024-10-13 04:09:41.400397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.250 [2024-10-13 04:09:41.400404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.250 [2024-10-13 04:09:41.400418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.251 [2024-10-13 04:09:41.400426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:48.251 [2024-10-13 04:09:41.400433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.251 [2024-10-13 04:09:41.400440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.509 [2024-10-13 04:09:41.477573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.509 [2024-10-13 04:09:41.477642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:48.509 [2024-10-13 04:09:41.477654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.509 [2024-10-13 04:09:41.477662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.509 [2024-10-13 04:09:41.539958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.509 [2024-10-13 04:09:41.540008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:48.509 [2024-10-13 04:09:41.540021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.509 [2024-10-13 04:09:41.540030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.509 [2024-10-13 04:09:41.540093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.509 [2024-10-13 04:09:41.540107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:48.509 [2024-10-13 04:09:41.540115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.509 [2024-10-13 04:09:41.540130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.509 [2024-10-13 04:09:41.540164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.509 [2024-10-13 04:09:41.540172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:48.509 [2024-10-13 04:09:41.540180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.509 [2024-10-13 04:09:41.540187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.509 [2024-10-13 04:09:41.540356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.509 [2024-10-13 04:09:41.540368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:48.509 [2024-10-13 04:09:41.540375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.509 [2024-10-13 04:09:41.540383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.509 [2024-10-13 04:09:41.540408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.509 [2024-10-13 04:09:41.540417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:18:48.509 [2024-10-13 04:09:41.540424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.509 [2024-10-13 04:09:41.540432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.509 [2024-10-13 04:09:41.540463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.509 [2024-10-13 04:09:41.540472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:48.509 [2024-10-13 04:09:41.540482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.509 [2024-10-13 04:09:41.540489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.509 [2024-10-13 04:09:41.540525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.509 [2024-10-13 04:09:41.540535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:48.509 [2024-10-13 04:09:41.540542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.509 [2024-10-13 04:09:41.540549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.509 [2024-10-13 04:09:41.540679] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.765 ms, result 0 00:18:49.075 00:18:49.075 00:18:49.075 04:09:42 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:50.975 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:18:50.975 04:09:43 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:18:50.975 [2024-10-13 04:09:43.877990] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:18:50.975 [2024-10-13 04:09:43.878114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75313 ] 00:18:50.975 [2024-10-13 04:09:44.028976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.975 [2024-10-13 04:09:44.126179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.234 [2024-10-13 04:09:44.377041] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:51.234 [2024-10-13 04:09:44.377101] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:51.494 [2024-10-13 04:09:44.530137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.494 [2024-10-13 04:09:44.530179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:51.494 [2024-10-13 04:09:44.530193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:51.494 [2024-10-13 04:09:44.530205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.494 [2024-10-13 04:09:44.530246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.494 [2024-10-13 04:09:44.530256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:51.494 [2024-10-13 04:09:44.530264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:51.494 [2024-10-13 04:09:44.530274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.494 [2024-10-13 04:09:44.530290] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:51.494 [2024-10-13 04:09:44.530970] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:51.494 [2024-10-13 04:09:44.530991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.494 [2024-10-13 04:09:44.531001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:51.494 [2024-10-13 04:09:44.531009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:18:51.494 [2024-10-13 04:09:44.531016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.494 [2024-10-13 04:09:44.532056] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:51.494 [2024-10-13 04:09:44.544269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.494 [2024-10-13 04:09:44.544298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:51.494 [2024-10-13 04:09:44.544310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.215 ms 00:18:51.494 [2024-10-13 04:09:44.544318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.494 [2024-10-13 04:09:44.544370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.494 [2024-10-13 04:09:44.544379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:51.494 [2024-10-13 04:09:44.544390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:51.494 [2024-10-13 04:09:44.544397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.494 [2024-10-13 04:09:44.549215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:51.494 [2024-10-13 04:09:44.549250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:51.494 [2024-10-13 04:09:44.549261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.761 ms 00:18:51.494 [2024-10-13 04:09:44.549273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.494 [2024-10-13 04:09:44.549375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.494 [2024-10-13 04:09:44.549384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:51.494 [2024-10-13 04:09:44.549396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:51.494 [2024-10-13 04:09:44.549403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.494 [2024-10-13 04:09:44.549443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.494 [2024-10-13 04:09:44.549452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:51.494 [2024-10-13 04:09:44.549460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:51.494 [2024-10-13 04:09:44.549472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.494 [2024-10-13 04:09:44.549499] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:51.494 [2024-10-13 04:09:44.552723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.494 [2024-10-13 04:09:44.552749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:51.494 [2024-10-13 04:09:44.552758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:18:51.494 [2024-10-13 04:09:44.552765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.494 [2024-10-13 04:09:44.552794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.494 [2024-10-13 04:09:44.552802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:51.494 [2024-10-13 04:09:44.552810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:51.494 [2024-10-13 04:09:44.552816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.494 [2024-10-13 04:09:44.552835] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:51.494 [2024-10-13 04:09:44.552853] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:51.494 [2024-10-13 04:09:44.552887] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:51.494 [2024-10-13 04:09:44.552903] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:51.494 [2024-10-13 04:09:44.553006] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:51.494 [2024-10-13 04:09:44.553016] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:51.494 [2024-10-13 04:09:44.553026] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:51.494 [2024-10-13 04:09:44.553035] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:51.494 [2024-10-13 04:09:44.553043] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:51.494 [2024-10-13 04:09:44.553052] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:51.494 [2024-10-13 04:09:44.553059] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:51.494 [2024-10-13 04:09:44.553066] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:51.494 [2024-10-13 04:09:44.553073] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:51.494 [2024-10-13 04:09:44.553081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.494 [2024-10-13 04:09:44.553090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:51.494 [2024-10-13 04:09:44.553097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:18:51.494 [2024-10-13 04:09:44.553104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.494 [2024-10-13 04:09:44.553186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.494 [2024-10-13 04:09:44.553194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:51.494 [2024-10-13 04:09:44.553201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:51.494 [2024-10-13 04:09:44.553208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.494 [2024-10-13 04:09:44.553308] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:51.494 [2024-10-13 04:09:44.553317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:51.494 [2024-10-13 04:09:44.553327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:51.494 [2024-10-13 04:09:44.553335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.494 [2024-10-13 04:09:44.553342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:51.494 [2024-10-13 04:09:44.553349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:51.494 [2024-10-13 04:09:44.553356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:51.494 [2024-10-13 04:09:44.553363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:51.494 [2024-10-13 04:09:44.553370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:51.494 [2024-10-13 04:09:44.553377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:51.494 [2024-10-13 04:09:44.553384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:51.494 [2024-10-13 04:09:44.553390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:51.494 [2024-10-13 04:09:44.553396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:51.494 [2024-10-13 04:09:44.553403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:51.494 [2024-10-13 04:09:44.553410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:51.494 [2024-10-13 04:09:44.553422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.494 [2024-10-13 04:09:44.553428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:51.494 [2024-10-13 04:09:44.553435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:51.494 [2024-10-13 04:09:44.553441] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.494 [2024-10-13 04:09:44.553448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:51.494 [2024-10-13 04:09:44.553454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:51.494 [2024-10-13 04:09:44.553460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.494 [2024-10-13 04:09:44.553467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:51.494 [2024-10-13 04:09:44.553474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:51.494 [2024-10-13 04:09:44.553480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.494 [2024-10-13 04:09:44.553486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:51.494 [2024-10-13 04:09:44.553493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:51.494 [2024-10-13 04:09:44.553499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.494 [2024-10-13 04:09:44.553506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:51.495 [2024-10-13 04:09:44.553512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:51.495 [2024-10-13 04:09:44.553518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.495 [2024-10-13 04:09:44.553526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:51.495 [2024-10-13 04:09:44.553532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:51.495 [2024-10-13 04:09:44.553539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:51.495 [2024-10-13 04:09:44.553545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:51.495 [2024-10-13 04:09:44.553551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:51.495 [2024-10-13 04:09:44.553557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:51.495 [2024-10-13 04:09:44.553564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:51.495 [2024-10-13 04:09:44.553571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:51.495 [2024-10-13 04:09:44.553577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.495 [2024-10-13 04:09:44.553583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:51.495 [2024-10-13 04:09:44.553590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:51.495 [2024-10-13 04:09:44.553596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.495 [2024-10-13 04:09:44.553609] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:51.495 [2024-10-13 04:09:44.553628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:51.495 [2024-10-13 04:09:44.553635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:51.495 [2024-10-13 04:09:44.553642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.495 [2024-10-13 04:09:44.553650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:51.495 [2024-10-13 04:09:44.553656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:51.495 [2024-10-13 04:09:44.553663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:51.495 
[2024-10-13 04:09:44.553669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:51.495 [2024-10-13 04:09:44.553675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:51.495 [2024-10-13 04:09:44.553682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:51.495 [2024-10-13 04:09:44.553690] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:51.495 [2024-10-13 04:09:44.553699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:51.495 [2024-10-13 04:09:44.553708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:51.495 [2024-10-13 04:09:44.553715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:51.495 [2024-10-13 04:09:44.553723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:51.495 [2024-10-13 04:09:44.553731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:51.495 [2024-10-13 04:09:44.553738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:51.495 [2024-10-13 04:09:44.553745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:51.495 [2024-10-13 04:09:44.553752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:51.495 [2024-10-13 04:09:44.553759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:51.495 [2024-10-13 04:09:44.553766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:51.495 [2024-10-13 04:09:44.553773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:51.495 [2024-10-13 04:09:44.553780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:51.495 [2024-10-13 04:09:44.553787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:51.495 [2024-10-13 04:09:44.553794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:51.495 [2024-10-13 04:09:44.553801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:51.495 [2024-10-13 04:09:44.553808] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:51.495 [2024-10-13 04:09:44.553815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:51.495 [2024-10-13 04:09:44.553826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:51.495 [2024-10-13 04:09:44.553832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:51.495 [2024-10-13 04:09:44.553839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:51.495 [2024-10-13 04:09:44.553846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:51.495 [2024-10-13 04:09:44.553853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.495 [2024-10-13 04:09:44.553860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:51.495 [2024-10-13 04:09:44.553868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:18:51.495 [2024-10-13 04:09:44.553874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.495 [2024-10-13 04:09:44.579418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.495 [2024-10-13 04:09:44.579577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:51.495 [2024-10-13 04:09:44.579594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.491 ms 00:18:51.495 [2024-10-13 04:09:44.579602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.495 [2024-10-13 04:09:44.579702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.495 [2024-10-13 04:09:44.579715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:51.495 [2024-10-13 04:09:44.579723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:51.495 [2024-10-13 04:09:44.579730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.495 [2024-10-13 04:09:44.623406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.495 [2024-10-13 04:09:44.623449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:51.495 [2024-10-13 04:09:44.623461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.624 ms 00:18:51.495 [2024-10-13 04:09:44.623468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.495 [2024-10-13 04:09:44.623511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.495 [2024-10-13 04:09:44.623520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:51.495 [2024-10-13 04:09:44.623528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:51.495 [2024-10-13 04:09:44.623536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.495 [2024-10-13 04:09:44.623907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.495 [2024-10-13 04:09:44.623923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:51.495 [2024-10-13 04:09:44.623933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:18:51.495 [2024-10-13 04:09:44.623940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.495 [2024-10-13 04:09:44.624062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.495 [2024-10-13 04:09:44.624071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:51.495 [2024-10-13 04:09:44.624079] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:51.495 [2024-10-13 04:09:44.624086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.495 [2024-10-13 04:09:44.637192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.495 [2024-10-13 04:09:44.637223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:51.495 [2024-10-13 04:09:44.637233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.084 ms 00:18:51.495 [2024-10-13 04:09:44.637242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.495 [2024-10-13 04:09:44.649650] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:51.495 [2024-10-13 04:09:44.649685] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:51.495 [2024-10-13 04:09:44.649696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.495 [2024-10-13 04:09:44.649705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:51.495 [2024-10-13 04:09:44.649713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.360 ms 00:18:51.495 [2024-10-13 04:09:44.649721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.768 [2024-10-13 04:09:44.673881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.768 [2024-10-13 04:09:44.673918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:51.768 [2024-10-13 04:09:44.673933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.121 ms 00:18:51.768 [2024-10-13 04:09:44.673941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.768 [2024-10-13 04:09:44.685490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.768 [2024-10-13 04:09:44.685523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:51.768 [2024-10-13 04:09:44.685533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.507 ms 00:18:51.768 [2024-10-13 04:09:44.685540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.768 [2024-10-13 04:09:44.696427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.768 [2024-10-13 04:09:44.696577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:51.768 [2024-10-13 04:09:44.696594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.851 ms 00:18:51.768 [2024-10-13 04:09:44.696601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.768 [2024-10-13 04:09:44.697217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.768 [2024-10-13 04:09:44.697237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:51.768 [2024-10-13 04:09:44.697246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:18:51.768 [2024-10-13 04:09:44.697253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.768 [2024-10-13 04:09:44.751187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.768 [2024-10-13 04:09:44.751397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:51.768 [2024-10-13 04:09:44.751416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 53.915 ms 00:18:51.768 [2024-10-13 04:09:44.751429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.768 [2024-10-13 04:09:44.761663] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:51.768 [2024-10-13 04:09:44.764181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.768 [2024-10-13 04:09:44.764210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:51.768 [2024-10-13 04:09:44.764223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.657 ms 00:18:51.768 [2024-10-13 04:09:44.764231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.768 [2024-10-13 04:09:44.764320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.768 [2024-10-13 04:09:44.764331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:51.768 [2024-10-13 04:09:44.764340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:51.768 [2024-10-13 04:09:44.764347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.768 [2024-10-13 04:09:44.764412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.768 [2024-10-13 04:09:44.764423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:51.768 [2024-10-13 04:09:44.764431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:51.768 [2024-10-13 04:09:44.764438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.768 [2024-10-13 04:09:44.764457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.768 [2024-10-13 04:09:44.764465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:51.768 [2024-10-13 04:09:44.764472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:51.768 [2024-10-13 04:09:44.764480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.768 [2024-10-13 04:09:44.764509] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:51.768 [2024-10-13 04:09:44.764518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.768 [2024-10-13 04:09:44.764528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:51.768 [2024-10-13 04:09:44.764536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:51.768 [2024-10-13 04:09:44.764543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.768 [2024-10-13 04:09:44.787479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.768 [2024-10-13 04:09:44.787514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:51.768 [2024-10-13 04:09:44.787525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.920 ms 00:18:51.768 [2024-10-13 04:09:44.787533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.768 [2024-10-13 04:09:44.787602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.768 [2024-10-13 04:09:44.787611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:51.768 [2024-10-13 04:09:44.787636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:51.768 [2024-10-13 04:09:44.787643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:51.768 [2024-10-13 04:09:44.788579] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.032 ms, result 0 00:18:52.736  [2024-10-13T04:09:46.831Z] Copying: 47/1024 [MB] (47 MBps) [2024-10-13T04:09:48.206Z] Copying: 93/1024 [MB] (46 MBps) [2024-10-13T04:09:49.142Z] Copying: 139/1024 [MB] (46 MBps) [2024-10-13T04:09:50.079Z] Copying: 186/1024 [MB] (46 MBps) [2024-10-13T04:09:51.014Z] Copying: 232/1024 [MB] (46 MBps) [2024-10-13T04:09:51.948Z] Copying: 279/1024 [MB] (46 MBps) [2024-10-13T04:09:52.880Z] Copying: 325/1024 [MB] (46 MBps) [2024-10-13T04:09:53.812Z] Copying: 371/1024 [MB] (46 MBps) [2024-10-13T04:09:55.182Z] Copying: 418/1024 [MB] (46 MBps) [2024-10-13T04:09:56.115Z] Copying: 464/1024 [MB] (46 MBps) [2024-10-13T04:09:57.047Z] Copying: 511/1024 [MB] (46 MBps) [2024-10-13T04:09:57.979Z] Copying: 557/1024 [MB] (46 MBps) [2024-10-13T04:09:58.909Z] Copying: 604/1024 [MB] (46 MBps) [2024-10-13T04:09:59.862Z] Copying: 650/1024 [MB] (46 MBps) [2024-10-13T04:10:01.231Z] Copying: 696/1024 [MB] (46 MBps) [2024-10-13T04:10:02.165Z] Copying: 744/1024 [MB] (47 MBps) [2024-10-13T04:10:03.098Z] Copying: 798/1024 [MB] (54 MBps) [2024-10-13T04:10:04.030Z] Copying: 843/1024 [MB] (45 MBps) [2024-10-13T04:10:04.964Z] Copying: 889/1024 [MB] (46 MBps) [2024-10-13T04:10:05.897Z] Copying: 936/1024 [MB] (46 MBps) [2024-10-13T04:10:06.830Z] Copying: 986/1024 [MB] (50 MBps) [2024-10-13T04:10:07.396Z] Copying: 1023/1024 [MB] (36 MBps) [2024-10-13T04:10:07.396Z] Copying: 1024/1024 [MB] (average 45 MBps)[2024-10-13 04:10:07.318401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.236 [2024-10-13 04:10:07.318604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:14.236 [2024-10-13 04:10:07.318635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:14.236 [2024-10-13 04:10:07.318644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.236 [2024-10-13 04:10:07.320180] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:14.236 [2024-10-13 04:10:07.327199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.236 [2024-10-13 04:10:07.327230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:14.236 [2024-10-13 04:10:07.327242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.983 ms 00:19:14.236 [2024-10-13 04:10:07.327249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.236 [2024-10-13 04:10:07.336665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.236 [2024-10-13 04:10:07.336787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:14.236 [2024-10-13 04:10:07.336803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.312 ms 00:19:14.236 [2024-10-13 04:10:07.336810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.236 [2024-10-13 04:10:07.354355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.236 [2024-10-13 04:10:07.354392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:14.236 [2024-10-13 04:10:07.354402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.528 ms 00:19:14.236 [2024-10-13 04:10:07.354409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.236 [2024-10-13 04:10:07.360710] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.236 [2024-10-13 04:10:07.360736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:14.236 [2024-10-13 04:10:07.360745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.274 ms 00:19:14.236 [2024-10-13 04:10:07.360753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.236 [2024-10-13 04:10:07.383913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.236 [2024-10-13 04:10:07.384053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:14.236 [2024-10-13 04:10:07.384070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.094 ms 00:19:14.236 [2024-10-13 04:10:07.384078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.495 [2024-10-13 04:10:07.397499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.495 [2024-10-13 04:10:07.397530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:14.495 [2024-10-13 04:10:07.397546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.392 ms 00:19:14.495 [2024-10-13 04:10:07.397553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.495 [2024-10-13 04:10:07.447445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.495 [2024-10-13 04:10:07.447602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:14.495 [2024-10-13 04:10:07.447630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.857 ms 00:19:14.495 [2024-10-13 04:10:07.447640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.495 [2024-10-13 04:10:07.470685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.495 [2024-10-13 04:10:07.470720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:14.495 [2024-10-13 04:10:07.470731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.026 ms 00:19:14.495 [2024-10-13 04:10:07.470738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.495 [2024-10-13 04:10:07.492826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.495 [2024-10-13 04:10:07.492864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:14.495 [2024-10-13 04:10:07.492874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.058 ms 00:19:14.495 [2024-10-13 04:10:07.492881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.495 [2024-10-13 04:10:07.515297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.495 [2024-10-13 04:10:07.515327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:14.495 [2024-10-13 04:10:07.515337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.386 ms 00:19:14.495 [2024-10-13 04:10:07.515344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.495 [2024-10-13 04:10:07.537237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.495 [2024-10-13 04:10:07.537267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:14.495 [2024-10-13 04:10:07.537277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.840 ms 00:19:14.495 [2024-10-13 04:10:07.537284] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:14.495 [2024-10-13 04:10:07.537313] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:14.495 [2024-10-13 04:10:07.537327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120064 / 261120 wr_cnt: 1 state: open 00:19:14.495 [2024-10-13 04:10:07.537337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:14.495 [2024-10-13 04:10:07.537637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537892] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.537995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538077] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:14.496 [2024-10-13 04:10:07.538099] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:14.496 [2024-10-13 04:10:07.538107] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 62a628b0-e563-49b5-b153-de474dcf74ca 00:19:14.496 [2024-10-13 04:10:07.538114] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120064 00:19:14.496 [2024-10-13 04:10:07.538121] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121024 00:19:14.496 [2024-10-13 04:10:07.538128] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120064 00:19:14.496 [2024-10-13 04:10:07.538135] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080 00:19:14.496 [2024-10-13 04:10:07.538142] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:14.496 [2024-10-13 04:10:07.538150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:14.496 [2024-10-13 04:10:07.538164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:14.496 [2024-10-13 04:10:07.538171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:14.496 [2024-10-13 04:10:07.538177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:14.496 [2024-10-13 04:10:07.538184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.496 [2024-10-13 04:10:07.538193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:14.496 [2024-10-13 04:10:07.538201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:19:14.496 [2024-10-13 04:10:07.538208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.496 [2024-10-13 04:10:07.550377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.496 [2024-10-13 04:10:07.550404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:14.496 [2024-10-13 04:10:07.550414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.153 ms 00:19:14.496 [2024-10-13 04:10:07.550422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.496 [2024-10-13 04:10:07.550777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.496 [2024-10-13 04:10:07.550786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:14.496 [2024-10-13 04:10:07.550794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:19:14.496 [2024-10-13 04:10:07.550802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.496 [2024-10-13 04:10:07.582978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.496 [2024-10-13 04:10:07.583015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.496 [2024-10-13 04:10:07.583026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.496 [2024-10-13 04:10:07.583038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.496 [2024-10-13 04:10:07.583094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.496 [2024-10-13 04:10:07.583102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:19:14.496 [2024-10-13 04:10:07.583109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.496 [2024-10-13 04:10:07.583116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.496 [2024-10-13 04:10:07.583191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.496 [2024-10-13 04:10:07.583201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.496 [2024-10-13 04:10:07.583209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.496 [2024-10-13 04:10:07.583216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.496 [2024-10-13 04:10:07.583232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.496 [2024-10-13 04:10:07.583240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.496 [2024-10-13 04:10:07.583248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.496 [2024-10-13 04:10:07.583255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.755 [2024-10-13 04:10:07.659582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.755 [2024-10-13 04:10:07.659648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.755 [2024-10-13 04:10:07.659660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.755 [2024-10-13 04:10:07.659671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.755 [2024-10-13 04:10:07.722481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.755 [2024-10-13 04:10:07.722518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.755 [2024-10-13 04:10:07.722529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.755 [2024-10-13 04:10:07.722537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.755 [2024-10-13 04:10:07.722585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.755 [2024-10-13 04:10:07.722595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.755 [2024-10-13 04:10:07.722602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.755 [2024-10-13 04:10:07.722610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.755 [2024-10-13 04:10:07.722669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.755 [2024-10-13 04:10:07.722681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.755 [2024-10-13 04:10:07.722688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.755 [2024-10-13 04:10:07.722695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.755 [2024-10-13 04:10:07.722776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.755 [2024-10-13 04:10:07.722785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.755 [2024-10-13 04:10:07.722793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.755 [2024-10-13 04:10:07.722800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.755 [2024-10-13 04:10:07.722826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.755 [2024-10-13 04:10:07.722837] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:14.755 [2024-10-13 04:10:07.722845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.755 [2024-10-13 04:10:07.722853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.755 [2024-10-13 04:10:07.722886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.755 [2024-10-13 04:10:07.722894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.755 [2024-10-13 04:10:07.722902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.755 [2024-10-13 04:10:07.722910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.755 [2024-10-13 04:10:07.722949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.755 [2024-10-13 04:10:07.722960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.755 [2024-10-13 04:10:07.722968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.755 [2024-10-13 04:10:07.722975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.755 [2024-10-13 04:10:07.723078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.207 ms, result 0 00:19:17.282 00:19:17.282 00:19:17.282 04:10:10 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:19:17.282 [2024-10-13 04:10:10.289755] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
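As a quick cross-check of the statistics dumped by ftl_dev_dump_stats above, the WAF figure is consistent with the two write counters, assuming WAF here is the usual write-amplification ratio of total media writes to user writes. A minimal sketch (Python, not part of the test output):

    # Sketch only: verify the WAF printed above from its two counters,
    # assuming WAF = total writes / user writes (write amplification factor).
    total_writes = 121024   # "total writes" from the dump above
    user_writes  = 120064   # "user writes" from the dump above
    print(f"WAF: {total_writes / user_writes:.4f}")   # WAF: 1.0080, as logged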
00:19:17.282 [2024-10-13 04:10:10.289875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75576 ] 00:19:17.282 [2024-10-13 04:10:10.440206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.540 [2024-10-13 04:10:10.537863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.797 [2024-10-13 04:10:10.786818] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.797 [2024-10-13 04:10:10.787023] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.797 [2024-10-13 04:10:10.940286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.797 [2024-10-13 04:10:10.940340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:17.797 [2024-10-13 04:10:10.940352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:17.797 [2024-10-13 04:10:10.940364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.797 [2024-10-13 04:10:10.940406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.797 [2024-10-13 04:10:10.940416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.797 [2024-10-13 04:10:10.940424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:17.797 [2024-10-13 04:10:10.940433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.797 [2024-10-13 04:10:10.940450] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:17.797 [2024-10-13 04:10:10.941218] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:17.797 [2024-10-13 04:10:10.941246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.797 [2024-10-13 04:10:10.941257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.797 [2024-10-13 04:10:10.941265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:19:17.797 [2024-10-13 04:10:10.941273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.797 [2024-10-13 04:10:10.942250] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:17.797 [2024-10-13 04:10:10.954532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.797 [2024-10-13 04:10:10.954564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:17.797 [2024-10-13 04:10:10.954576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.283 ms 00:19:17.797 [2024-10-13 04:10:10.954584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.797 [2024-10-13 04:10:10.954649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.797 [2024-10-13 04:10:10.954659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:17.797 [2024-10-13 04:10:10.954669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:17.797 [2024-10-13 04:10:10.954676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.056 [2024-10-13 04:10:10.959283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:18.056 [2024-10-13 04:10:10.959317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:18.056 [2024-10-13 04:10:10.959331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.547 ms 00:19:18.056 [2024-10-13 04:10:10.959339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.056 [2024-10-13 04:10:10.959430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.056 [2024-10-13 04:10:10.959439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:18.056 [2024-10-13 04:10:10.959447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:18.056 [2024-10-13 04:10:10.959454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.056 [2024-10-13 04:10:10.959495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.056 [2024-10-13 04:10:10.959504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:18.056 [2024-10-13 04:10:10.959517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:18.056 [2024-10-13 04:10:10.959527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.056 [2024-10-13 04:10:10.959550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:18.056 [2024-10-13 04:10:10.962826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.056 [2024-10-13 04:10:10.962854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:18.056 [2024-10-13 04:10:10.962863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.281 ms 00:19:18.056 [2024-10-13 04:10:10.962871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.056 [2024-10-13 04:10:10.962901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.056 [2024-10-13 04:10:10.962909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:18.056 [2024-10-13 04:10:10.962917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:18.056 [2024-10-13 04:10:10.962924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.056 [2024-10-13 04:10:10.962942] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:18.056 [2024-10-13 04:10:10.962958] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:18.056 [2024-10-13 04:10:10.962991] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:18.056 [2024-10-13 04:10:10.963008] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:18.056 [2024-10-13 04:10:10.963108] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:18.056 [2024-10-13 04:10:10.963119] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:18.056 [2024-10-13 04:10:10.963129] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:18.056 [2024-10-13 04:10:10.963138] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:18.056 [2024-10-13 04:10:10.963147] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:18.056 [2024-10-13 04:10:10.963154] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:18.056 [2024-10-13 04:10:10.963162] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:18.056 [2024-10-13 04:10:10.963169] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:18.056 [2024-10-13 04:10:10.963176] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:18.056 [2024-10-13 04:10:10.963183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.056 [2024-10-13 04:10:10.963193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:18.056 [2024-10-13 04:10:10.963200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:19:18.056 [2024-10-13 04:10:10.963207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.056 [2024-10-13 04:10:10.963288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.056 [2024-10-13 04:10:10.963296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:18.056 [2024-10-13 04:10:10.963303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:18.056 [2024-10-13 04:10:10.963310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.056 [2024-10-13 04:10:10.963410] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:18.056 [2024-10-13 04:10:10.963419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:18.056 [2024-10-13 04:10:10.963429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:18.056 [2024-10-13 04:10:10.963436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.056 [2024-10-13 04:10:10.963444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:18.056 [2024-10-13 04:10:10.963450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:18.056 [2024-10-13 04:10:10.963457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:18.056 [2024-10-13 04:10:10.963464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:18.056 [2024-10-13 04:10:10.963471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:18.056 [2024-10-13 04:10:10.963478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:18.056 [2024-10-13 04:10:10.963484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:18.056 [2024-10-13 04:10:10.963491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:18.056 [2024-10-13 04:10:10.963497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:18.056 [2024-10-13 04:10:10.963503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:18.056 [2024-10-13 04:10:10.963510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:18.056 [2024-10-13 04:10:10.963521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.056 [2024-10-13 04:10:10.963528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:18.056 [2024-10-13 04:10:10.963534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:18.056 [2024-10-13 04:10:10.963540] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.056 [2024-10-13 04:10:10.963547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:18.056 [2024-10-13 04:10:10.963553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:18.056 [2024-10-13 04:10:10.963559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.056 [2024-10-13 04:10:10.963565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:18.056 [2024-10-13 04:10:10.963572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:18.056 [2024-10-13 04:10:10.963579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.056 [2024-10-13 04:10:10.963585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:18.056 [2024-10-13 04:10:10.963591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:18.057 [2024-10-13 04:10:10.963597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.057 [2024-10-13 04:10:10.963603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:18.057 [2024-10-13 04:10:10.963610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:18.057 [2024-10-13 04:10:10.963636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.057 [2024-10-13 04:10:10.963644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:18.057 [2024-10-13 04:10:10.963650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:18.057 [2024-10-13 04:10:10.963656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:18.057 [2024-10-13 04:10:10.963663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:18.057 [2024-10-13 04:10:10.963670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:18.057 [2024-10-13 04:10:10.963676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:18.057 [2024-10-13 04:10:10.963682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:18.057 [2024-10-13 04:10:10.963689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:18.057 [2024-10-13 04:10:10.963695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.057 [2024-10-13 04:10:10.963701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:18.057 [2024-10-13 04:10:10.963708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:18.057 [2024-10-13 04:10:10.963715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.057 [2024-10-13 04:10:10.963721] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:18.057 [2024-10-13 04:10:10.963729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:18.057 [2024-10-13 04:10:10.963736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:18.057 [2024-10-13 04:10:10.963743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.057 [2024-10-13 04:10:10.963750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:18.057 [2024-10-13 04:10:10.963756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:18.057 [2024-10-13 04:10:10.963762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:18.057 
[2024-10-13 04:10:10.963769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:18.057 [2024-10-13 04:10:10.963775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:18.057 [2024-10-13 04:10:10.963781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:18.057 [2024-10-13 04:10:10.963789] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:18.057 [2024-10-13 04:10:10.963798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:18.057 [2024-10-13 04:10:10.963805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:18.057 [2024-10-13 04:10:10.963813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:18.057 [2024-10-13 04:10:10.963819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:18.057 [2024-10-13 04:10:10.963826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:18.057 [2024-10-13 04:10:10.963833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:18.057 [2024-10-13 04:10:10.963848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:18.057 [2024-10-13 04:10:10.963855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:18.057 [2024-10-13 04:10:10.963862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:18.057 [2024-10-13 04:10:10.963870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:18.057 [2024-10-13 04:10:10.963877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:18.057 [2024-10-13 04:10:10.963884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:18.057 [2024-10-13 04:10:10.963891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:18.057 [2024-10-13 04:10:10.963898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:18.057 [2024-10-13 04:10:10.963905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:18.057 [2024-10-13 04:10:10.963911] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:18.057 [2024-10-13 04:10:10.963919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:18.057 [2024-10-13 04:10:10.963929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:18.057 [2024-10-13 04:10:10.963936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:18.057 [2024-10-13 04:10:10.963943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:18.057 [2024-10-13 04:10:10.963950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:18.057 [2024-10-13 04:10:10.963957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:10.963964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:18.057 [2024-10-13 04:10:10.963971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:19:18.057 [2024-10-13 04:10:10.963978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:10.989139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:10.989279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:18.057 [2024-10-13 04:10:10.989294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.107 ms 00:19:18.057 [2024-10-13 04:10:10.989303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:10.989381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:10.989393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:18.057 [2024-10-13 04:10:10.989400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:18.057 [2024-10-13 04:10:10.989408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:11.034106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:11.034145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:18.057 [2024-10-13 04:10:11.034157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.652 ms 00:19:18.057 [2024-10-13 04:10:11.034165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:11.034204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:11.034214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:18.057 [2024-10-13 04:10:11.034222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:18.057 [2024-10-13 04:10:11.034230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:11.034575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:11.034589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:18.057 [2024-10-13 04:10:11.034598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:19:18.057 [2024-10-13 04:10:11.034605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:11.034748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:11.034758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:18.057 [2024-10-13 04:10:11.034766] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:18.057 [2024-10-13 04:10:11.034774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:11.047457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:11.047591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.057 [2024-10-13 04:10:11.047607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.662 ms 00:19:18.057 [2024-10-13 04:10:11.047644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:11.059807] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:19:18.057 [2024-10-13 04:10:11.059837] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:18.057 [2024-10-13 04:10:11.059849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:11.059857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:18.057 [2024-10-13 04:10:11.059865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.113 ms 00:19:18.057 [2024-10-13 04:10:11.059871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:11.083761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:11.083805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:18.057 [2024-10-13 04:10:11.083819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.856 ms 00:19:18.057 [2024-10-13 04:10:11.083827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:11.095068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:11.095103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:18.057 [2024-10-13 04:10:11.095113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.207 ms 00:19:18.057 [2024-10-13 04:10:11.095120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:11.106256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:11.106377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:18.057 [2024-10-13 04:10:11.106392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.106 ms 00:19:18.057 [2024-10-13 04:10:11.106399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:11.107002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:11.107020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:18.057 [2024-10-13 04:10:11.107029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:19:18.057 [2024-10-13 04:10:11.107035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:11.161284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:11.161486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:18.057 [2024-10-13 04:10:11.161505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.229 ms 00:19:18.057 [2024-10-13 04:10:11.161518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.057 [2024-10-13 04:10:11.172049] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:18.057 [2024-10-13 04:10:11.174426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.057 [2024-10-13 04:10:11.174458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:18.058 [2024-10-13 04:10:11.174470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.615 ms 00:19:18.058 [2024-10-13 04:10:11.174477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.058 [2024-10-13 04:10:11.174579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.058 [2024-10-13 04:10:11.174591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:18.058 [2024-10-13 04:10:11.174600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:18.058 [2024-10-13 04:10:11.174608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.058 [2024-10-13 04:10:11.175956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.058 [2024-10-13 04:10:11.175987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:18.058 [2024-10-13 04:10:11.175997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:19:18.058 [2024-10-13 04:10:11.176006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.058 [2024-10-13 04:10:11.176031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.058 [2024-10-13 04:10:11.176039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:18.058 [2024-10-13 04:10:11.176047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:18.058 [2024-10-13 04:10:11.176054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.058 [2024-10-13 04:10:11.176087] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:18.058 [2024-10-13 04:10:11.176096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.058 [2024-10-13 04:10:11.176106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:18.058 [2024-10-13 04:10:11.176114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:18.058 [2024-10-13 04:10:11.176122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.058 [2024-10-13 04:10:11.198917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.058 [2024-10-13 04:10:11.199041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:18.058 [2024-10-13 04:10:11.199057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.779 ms 00:19:18.058 [2024-10-13 04:10:11.199065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.058 [2024-10-13 04:10:11.199147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.058 [2024-10-13 04:10:11.199157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:18.058 [2024-10-13 04:10:11.199165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:18.058 [2024-10-13 04:10:11.199172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
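Before the copy progress below, it is worth relating the spdk_dd block counts from the restore command above to the byte sizes reported in this log. The sketch assumes the ftl0 bdev exposes 4096-byte logical blocks and that --skip/--count are given in those blocks; both assumptions happen to agree with the figures seen here:

    # Sketch only: relate spdk_dd's --skip/--count and the layout dump's
    # L2P parameters to the sizes reported in this log.
    BLOCK_SIZE = 4096                             # assumed ftl0 logical block size
    skip_blocks, count_blocks = 131072, 262144    # --skip / --count from the command above
    print(skip_blocks  * BLOCK_SIZE // 2**20, "MiB skipped")  # 512 MiB
    print(count_blocks * BLOCK_SIZE // 2**20, "MiB copied")   # 1024 MiB -> the 1024/1024 [MB] progress below

    l2p_entries, l2p_addr_size = 20971520, 4      # from the layout dump above
    print(l2p_entries * l2p_addr_size / 2**20, "MiB L2P region")  # 80.0 MiB ("Region l2p ... 80.00 MiB")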
00:19:18.058 [2024-10-13 04:10:11.200102] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.351 ms, result 0 00:19:19.431  [2024-10-13T04:10:13.524Z] Copying: 43/1024 [MB] (43 MBps) [2024-10-13T04:10:14.458Z] Copying: 93/1024 [MB] (49 MBps) [2024-10-13T04:10:15.392Z] Copying: 144/1024 [MB] (50 MBps) [2024-10-13T04:10:16.766Z] Copying: 195/1024 [MB] (51 MBps) [2024-10-13T04:10:17.699Z] Copying: 246/1024 [MB] (50 MBps) [2024-10-13T04:10:18.641Z] Copying: 295/1024 [MB] (48 MBps) [2024-10-13T04:10:19.573Z] Copying: 343/1024 [MB] (48 MBps) [2024-10-13T04:10:20.507Z] Copying: 392/1024 [MB] (48 MBps) [2024-10-13T04:10:21.441Z] Copying: 439/1024 [MB] (46 MBps) [2024-10-13T04:10:22.816Z] Copying: 488/1024 [MB] (49 MBps) [2024-10-13T04:10:23.388Z] Copying: 538/1024 [MB] (49 MBps) [2024-10-13T04:10:24.762Z] Copying: 587/1024 [MB] (49 MBps) [2024-10-13T04:10:25.697Z] Copying: 638/1024 [MB] (50 MBps) [2024-10-13T04:10:26.632Z] Copying: 686/1024 [MB] (48 MBps) [2024-10-13T04:10:27.566Z] Copying: 734/1024 [MB] (47 MBps) [2024-10-13T04:10:28.498Z] Copying: 782/1024 [MB] (48 MBps) [2024-10-13T04:10:29.430Z] Copying: 831/1024 [MB] (48 MBps) [2024-10-13T04:10:30.803Z] Copying: 881/1024 [MB] (50 MBps) [2024-10-13T04:10:31.736Z] Copying: 930/1024 [MB] (48 MBps) [2024-10-13T04:10:32.669Z] Copying: 976/1024 [MB] (46 MBps) [2024-10-13T04:10:32.669Z] Copying: 1024/1024 [MB] (average 48 MBps)[2024-10-13 04:10:32.617734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.509 [2024-10-13 04:10:32.617998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:39.509 [2024-10-13 04:10:32.618080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:39.509 [2024-10-13 04:10:32.618112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.509 [2024-10-13 04:10:32.618162] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:39.509 [2024-10-13 04:10:32.621535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.509 [2024-10-13 04:10:32.621686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:39.509 [2024-10-13 04:10:32.621759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.263 ms 00:19:39.509 [2024-10-13 04:10:32.621826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.509 [2024-10-13 04:10:32.622120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.509 [2024-10-13 04:10:32.622159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:39.509 [2024-10-13 04:10:32.622231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:19:39.509 [2024-10-13 04:10:32.622259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.509 [2024-10-13 04:10:32.627474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.509 [2024-10-13 04:10:32.628441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:39.510 [2024-10-13 04:10:32.628458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.182 ms 00:19:39.510 [2024-10-13 04:10:32.628468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.510 [2024-10-13 04:10:32.634724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.510 [2024-10-13 04:10:32.634847] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:39.510 [2024-10-13 04:10:32.634903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.220 ms 00:19:39.510 [2024-10-13 04:10:32.634924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.510 [2024-10-13 04:10:32.658672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.510 [2024-10-13 04:10:32.658796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:39.510 [2024-10-13 04:10:32.658844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.540 ms 00:19:39.510 [2024-10-13 04:10:32.658865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.768 [2024-10-13 04:10:32.672307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.768 [2024-10-13 04:10:32.672413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:39.768 [2024-10-13 04:10:32.672432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.404 ms 00:19:39.768 [2024-10-13 04:10:32.672440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.768 [2024-10-13 04:10:32.728847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.768 [2024-10-13 04:10:32.728893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:39.768 [2024-10-13 04:10:32.728904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.375 ms 00:19:39.768 [2024-10-13 04:10:32.728912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.768 [2024-10-13 04:10:32.752253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.768 [2024-10-13 04:10:32.752283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:39.768 [2024-10-13 04:10:32.752293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.328 ms 00:19:39.768 [2024-10-13 04:10:32.752300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.768 [2024-10-13 04:10:32.775083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.768 [2024-10-13 04:10:32.775109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:39.768 [2024-10-13 04:10:32.775126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.752 ms 00:19:39.768 [2024-10-13 04:10:32.775133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.768 [2024-10-13 04:10:32.797383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.768 [2024-10-13 04:10:32.797412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:39.768 [2024-10-13 04:10:32.797422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.219 ms 00:19:39.768 [2024-10-13 04:10:32.797429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.768 [2024-10-13 04:10:32.819335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.768 [2024-10-13 04:10:32.819363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:39.768 [2024-10-13 04:10:32.819373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.856 ms 00:19:39.768 [2024-10-13 04:10:32.819381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.768 [2024-10-13 04:10:32.819410] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:19:39.768 [2024-10-13 04:10:32.819423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:19:39.768 [2024-10-13 04:10:32.819432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:39.768 [2024-10-13 04:10:32.819538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819804] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 
04:10:32.819989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.819997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:19:39.769 [2024-10-13 04:10:32.820186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:39.769 [2024-10-13 04:10:32.820202] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:39.769 [2024-10-13 04:10:32.820209] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 62a628b0-e563-49b5-b153-de474dcf74ca 00:19:39.769 [2024-10-13 04:10:32.820217] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:19:39.769 [2024-10-13 04:10:32.820224] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 11968 00:19:39.769 [2024-10-13 04:10:32.820231] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 11008 00:19:39.769 [2024-10-13 04:10:32.820239] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0872 00:19:39.769 [2024-10-13 04:10:32.820246] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:39.769 [2024-10-13 04:10:32.820254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:39.769 [2024-10-13 04:10:32.820261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:39.769 [2024-10-13 04:10:32.820273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:39.769 [2024-10-13 04:10:32.820280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:39.769 [2024-10-13 04:10:32.820287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.769 [2024-10-13 04:10:32.820297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:39.769 [2024-10-13 04:10:32.820305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:19:39.769 [2024-10-13 04:10:32.820312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.769 [2024-10-13 04:10:32.832354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.769 [2024-10-13 04:10:32.832383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:39.769 [2024-10-13 04:10:32.832397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.014 ms 00:19:39.769 [2024-10-13 04:10:32.832405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.769 [2024-10-13 04:10:32.832754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.769 [2024-10-13 04:10:32.832768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:39.769 [2024-10-13 04:10:32.832777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:19:39.769 [2024-10-13 04:10:32.832784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.769 [2024-10-13 04:10:32.865025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.769 [2024-10-13 04:10:32.865054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:39.769 [2024-10-13 04:10:32.865063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.769 [2024-10-13 04:10:32.865075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.769 [2024-10-13 04:10:32.865130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.769 [2024-10-13 04:10:32.865138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:39.769 [2024-10-13 04:10:32.865145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:39.769 [2024-10-13 04:10:32.865153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.769 [2024-10-13 04:10:32.865202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.769 [2024-10-13 04:10:32.865212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:39.769 [2024-10-13 04:10:32.865219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.769 [2024-10-13 04:10:32.865226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.769 [2024-10-13 04:10:32.865243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:39.769 [2024-10-13 04:10:32.865251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:39.769 [2024-10-13 04:10:32.865258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:39.769 [2024-10-13 04:10:32.865265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.028 [2024-10-13 04:10:32.941933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.028 [2024-10-13 04:10:32.941976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.028 [2024-10-13 04:10:32.941986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.028 [2024-10-13 04:10:32.941998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.028 [2024-10-13 04:10:33.004242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.028 [2024-10-13 04:10:33.004281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.028 [2024-10-13 04:10:33.004291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.028 [2024-10-13 04:10:33.004298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.028 [2024-10-13 04:10:33.004355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.028 [2024-10-13 04:10:33.004365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.028 [2024-10-13 04:10:33.004372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.028 [2024-10-13 04:10:33.004380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.028 [2024-10-13 04:10:33.004413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.028 [2024-10-13 04:10:33.004425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.028 [2024-10-13 04:10:33.004433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.028 [2024-10-13 04:10:33.004441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.028 [2024-10-13 04:10:33.004521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.028 [2024-10-13 04:10:33.004531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.028 [2024-10-13 04:10:33.004538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.028 [2024-10-13 04:10:33.004546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.028 [2024-10-13 04:10:33.004571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.028 [2024-10-13 04:10:33.004583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:40.028 
[2024-10-13 04:10:33.004590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.028 [2024-10-13 04:10:33.004597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.028 [2024-10-13 04:10:33.004648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.028 [2024-10-13 04:10:33.004657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.028 [2024-10-13 04:10:33.004665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.028 [2024-10-13 04:10:33.004672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.028 [2024-10-13 04:10:33.004709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.028 [2024-10-13 04:10:33.004721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.028 [2024-10-13 04:10:33.004729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.028 [2024-10-13 04:10:33.004736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.028 [2024-10-13 04:10:33.004838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 387.087 ms, result 0 00:19:40.593 00:19:40.594 00:19:40.594 04:10:33 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:43.124 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:43.124 Process with pid 74569 is not found 00:19:43.124 Remove shared memory files 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74569 00:19:43.124 04:10:35 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74569 ']' 00:19:43.124 04:10:35 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74569 00:19:43.124 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74569) - No such process 00:19:43.124 04:10:35 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 74569 is not found' 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:43.124 04:10:35 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:19:43.124 00:19:43.124 real 2m1.835s 00:19:43.124 user 1m52.569s 00:19:43.124 sys 0m10.741s 00:19:43.124 04:10:35 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:43.124 04:10:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:43.124 ************************************ 00:19:43.124 END TEST ftl_restore 00:19:43.124 
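[Editor's note] The restore pass above finishes by re-checking the test file with md5sum and tearing the FTL device down; the "WAF: 1.0872" line in the shutdown dump is simply total media writes divided by user writes. A quick arithmetic check of the values printed above (an illustrative sketch only, not part of the test scripts):

    # values taken from the ftl_dev_dump_stats output above
    total_writes=11968
    user_writes=11008
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF = %.4f\n", t / u }'
    # prints: WAF = 1.0872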
************************************ 00:19:43.124 04:10:35 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:19:43.124 04:10:35 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:43.124 04:10:35 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:43.124 04:10:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:43.124 ************************************ 00:19:43.124 START TEST ftl_dirty_shutdown 00:19:43.124 ************************************ 00:19:43.124 04:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:19:43.124 * Looking for test storage... 00:19:43.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.124 04:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:43.124 04:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:19:43.124 04:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:43.124 04:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:43.124 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.124 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.124 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:43.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.125 --rc genhtml_branch_coverage=1 00:19:43.125 --rc genhtml_function_coverage=1 00:19:43.125 --rc genhtml_legend=1 00:19:43.125 --rc geninfo_all_blocks=1 00:19:43.125 --rc geninfo_unexecuted_blocks=1 00:19:43.125 00:19:43.125 ' 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:43.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.125 --rc genhtml_branch_coverage=1 00:19:43.125 --rc genhtml_function_coverage=1 00:19:43.125 --rc genhtml_legend=1 00:19:43.125 --rc geninfo_all_blocks=1 00:19:43.125 --rc geninfo_unexecuted_blocks=1 00:19:43.125 00:19:43.125 ' 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:43.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.125 --rc genhtml_branch_coverage=1 00:19:43.125 --rc genhtml_function_coverage=1 00:19:43.125 --rc genhtml_legend=1 00:19:43.125 --rc geninfo_all_blocks=1 00:19:43.125 --rc geninfo_unexecuted_blocks=1 00:19:43.125 00:19:43.125 ' 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:43.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.125 --rc genhtml_branch_coverage=1 00:19:43.125 --rc genhtml_function_coverage=1 00:19:43.125 --rc genhtml_legend=1 00:19:43.125 --rc geninfo_all_blocks=1 00:19:43.125 --rc geninfo_unexecuted_blocks=1 00:19:43.125 00:19:43.125 ' 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:43.125 04:10:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:19:43.125 04:10:36 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=75909 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75909 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 75909 ']' 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.125 04:10:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:43.125 [2024-10-13 04:10:36.091404] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
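[Editor's note] The dirty_shutdown test drives its own spdk_tgt instance (started with -m 0x1 above) and blocks in waitforlisten until the RPC socket answers before issuing any bdev RPCs. A minimal sketch of that launch-and-wait pattern, assuming the repository layout shown in this log; the polling loop below is an illustration, not the actual waitforlisten implementation:

    spdk_repo=/home/vagrant/spdk_repo/spdk
    "$spdk_repo/build/bin/spdk_tgt" -m 0x1 &
    svcpid=$!
    # poll the default RPC socket until the target is ready to serve requests
    until "$spdk_repo/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # ... bdev_nvme_attach_controller / bdev_ftl_create RPCs go here ...
    kill "$svcpid"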
00:19:43.125 [2024-10-13 04:10:36.091522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75909 ] 00:19:43.125 [2024-10-13 04:10:36.240825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.384 [2024-10-13 04:10:36.339289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.954 04:10:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.954 04:10:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:19:43.954 04:10:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:43.954 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:19:43.954 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:43.954 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:19:43.954 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:19:43.954 04:10:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:44.212 04:10:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:44.212 04:10:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:19:44.212 04:10:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:44.212 04:10:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:44.212 04:10:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:44.212 04:10:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:44.212 04:10:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:19:44.212 04:10:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:44.212 04:10:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:44.212 { 00:19:44.212 "name": "nvme0n1", 00:19:44.212 "aliases": [ 00:19:44.212 "5ed52d1e-6615-4de3-a77f-799e0386af1a" 00:19:44.212 ], 00:19:44.212 "product_name": "NVMe disk", 00:19:44.212 "block_size": 4096, 00:19:44.212 "num_blocks": 1310720, 00:19:44.212 "uuid": "5ed52d1e-6615-4de3-a77f-799e0386af1a", 00:19:44.212 "numa_id": -1, 00:19:44.212 "assigned_rate_limits": { 00:19:44.212 "rw_ios_per_sec": 0, 00:19:44.212 "rw_mbytes_per_sec": 0, 00:19:44.212 "r_mbytes_per_sec": 0, 00:19:44.212 "w_mbytes_per_sec": 0 00:19:44.212 }, 00:19:44.212 "claimed": true, 00:19:44.212 "claim_type": "read_many_write_one", 00:19:44.212 "zoned": false, 00:19:44.212 "supported_io_types": { 00:19:44.212 "read": true, 00:19:44.212 "write": true, 00:19:44.212 "unmap": true, 00:19:44.212 "flush": true, 00:19:44.212 "reset": true, 00:19:44.212 "nvme_admin": true, 00:19:44.212 "nvme_io": true, 00:19:44.212 "nvme_io_md": false, 00:19:44.212 "write_zeroes": true, 00:19:44.212 "zcopy": false, 00:19:44.212 "get_zone_info": false, 00:19:44.212 "zone_management": false, 00:19:44.212 "zone_append": false, 00:19:44.212 "compare": true, 00:19:44.212 "compare_and_write": false, 00:19:44.212 "abort": true, 00:19:44.212 "seek_hole": false, 00:19:44.212 "seek_data": false, 00:19:44.212 
"copy": true, 00:19:44.212 "nvme_iov_md": false 00:19:44.212 }, 00:19:44.212 "driver_specific": { 00:19:44.212 "nvme": [ 00:19:44.212 { 00:19:44.212 "pci_address": "0000:00:11.0", 00:19:44.212 "trid": { 00:19:44.212 "trtype": "PCIe", 00:19:44.212 "traddr": "0000:00:11.0" 00:19:44.212 }, 00:19:44.212 "ctrlr_data": { 00:19:44.212 "cntlid": 0, 00:19:44.212 "vendor_id": "0x1b36", 00:19:44.212 "model_number": "QEMU NVMe Ctrl", 00:19:44.212 "serial_number": "12341", 00:19:44.212 "firmware_revision": "8.0.0", 00:19:44.212 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:44.212 "oacs": { 00:19:44.212 "security": 0, 00:19:44.212 "format": 1, 00:19:44.212 "firmware": 0, 00:19:44.212 "ns_manage": 1 00:19:44.212 }, 00:19:44.212 "multi_ctrlr": false, 00:19:44.212 "ana_reporting": false 00:19:44.212 }, 00:19:44.212 "vs": { 00:19:44.213 "nvme_version": "1.4" 00:19:44.213 }, 00:19:44.213 "ns_data": { 00:19:44.213 "id": 1, 00:19:44.213 "can_share": false 00:19:44.213 } 00:19:44.213 } 00:19:44.213 ], 00:19:44.213 "mp_policy": "active_passive" 00:19:44.213 } 00:19:44.213 } 00:19:44.213 ]' 00:19:44.213 04:10:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=b5d31cd4-58a1-4fcb-8e2b-0032a1565c9a 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:19:44.471 04:10:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5d31cd4-58a1-4fcb-8e2b-0032a1565c9a 00:19:44.730 04:10:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:44.988 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=ff699a21-bcf2-4700-bd4a-a69c499fd4fa 00:19:44.988 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ff699a21-bcf2-4700-bd4a-a69c499fd4fa 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=074884ce-24a8-4d1f-b2f8-e377517741c4 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 074884ce-24a8-4d1f-b2f8-e377517741c4 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=074884ce-24a8-4d1f-b2f8-e377517741c4 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 074884ce-24a8-4d1f-b2f8-e377517741c4 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=074884ce-24a8-4d1f-b2f8-e377517741c4 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:19:45.245 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 074884ce-24a8-4d1f-b2f8-e377517741c4 00:19:45.502 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:45.502 { 00:19:45.502 "name": "074884ce-24a8-4d1f-b2f8-e377517741c4", 00:19:45.502 "aliases": [ 00:19:45.502 "lvs/nvme0n1p0" 00:19:45.502 ], 00:19:45.502 "product_name": "Logical Volume", 00:19:45.502 "block_size": 4096, 00:19:45.502 "num_blocks": 26476544, 00:19:45.502 "uuid": "074884ce-24a8-4d1f-b2f8-e377517741c4", 00:19:45.502 "assigned_rate_limits": { 00:19:45.502 "rw_ios_per_sec": 0, 00:19:45.502 "rw_mbytes_per_sec": 0, 00:19:45.502 "r_mbytes_per_sec": 0, 00:19:45.502 "w_mbytes_per_sec": 0 00:19:45.502 }, 00:19:45.502 "claimed": false, 00:19:45.502 "zoned": false, 00:19:45.502 "supported_io_types": { 00:19:45.502 "read": true, 00:19:45.502 "write": true, 00:19:45.502 "unmap": true, 00:19:45.502 "flush": false, 00:19:45.502 "reset": true, 00:19:45.502 "nvme_admin": false, 00:19:45.502 "nvme_io": false, 00:19:45.502 "nvme_io_md": false, 00:19:45.502 "write_zeroes": true, 00:19:45.502 "zcopy": false, 00:19:45.502 "get_zone_info": false, 00:19:45.502 "zone_management": false, 00:19:45.502 "zone_append": false, 00:19:45.502 "compare": false, 00:19:45.502 "compare_and_write": false, 00:19:45.502 "abort": false, 00:19:45.502 "seek_hole": true, 00:19:45.502 "seek_data": true, 00:19:45.502 "copy": false, 00:19:45.502 "nvme_iov_md": false 00:19:45.502 }, 00:19:45.502 "driver_specific": { 00:19:45.502 "lvol": { 00:19:45.502 "lvol_store_uuid": "ff699a21-bcf2-4700-bd4a-a69c499fd4fa", 00:19:45.502 "base_bdev": "nvme0n1", 00:19:45.502 "thin_provision": true, 00:19:45.502 "num_allocated_clusters": 0, 00:19:45.502 "snapshot": false, 00:19:45.502 "clone": false, 00:19:45.502 "esnap_clone": false 00:19:45.502 } 00:19:45.502 } 00:19:45.502 } 00:19:45.502 ]' 00:19:45.502 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:45.502 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:45.503 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:45.503 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:45.503 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:45.503 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:19:45.503 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:19:45.503 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:19:45.503 04:10:38 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:45.760 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:45.760 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:45.760 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 074884ce-24a8-4d1f-b2f8-e377517741c4 00:19:45.760 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=074884ce-24a8-4d1f-b2f8-e377517741c4 00:19:45.760 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:45.760 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:45.760 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:19:45.760 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 074884ce-24a8-4d1f-b2f8-e377517741c4 00:19:46.018 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:46.018 { 00:19:46.018 "name": "074884ce-24a8-4d1f-b2f8-e377517741c4", 00:19:46.018 "aliases": [ 00:19:46.018 "lvs/nvme0n1p0" 00:19:46.018 ], 00:19:46.018 "product_name": "Logical Volume", 00:19:46.018 "block_size": 4096, 00:19:46.018 "num_blocks": 26476544, 00:19:46.018 "uuid": "074884ce-24a8-4d1f-b2f8-e377517741c4", 00:19:46.018 "assigned_rate_limits": { 00:19:46.018 "rw_ios_per_sec": 0, 00:19:46.018 "rw_mbytes_per_sec": 0, 00:19:46.018 "r_mbytes_per_sec": 0, 00:19:46.018 "w_mbytes_per_sec": 0 00:19:46.018 }, 00:19:46.018 "claimed": false, 00:19:46.018 "zoned": false, 00:19:46.018 "supported_io_types": { 00:19:46.018 "read": true, 00:19:46.018 "write": true, 00:19:46.018 "unmap": true, 00:19:46.018 "flush": false, 00:19:46.018 "reset": true, 00:19:46.018 "nvme_admin": false, 00:19:46.018 "nvme_io": false, 00:19:46.018 "nvme_io_md": false, 00:19:46.018 "write_zeroes": true, 00:19:46.018 "zcopy": false, 00:19:46.018 "get_zone_info": false, 00:19:46.018 "zone_management": false, 00:19:46.018 "zone_append": false, 00:19:46.018 "compare": false, 00:19:46.018 "compare_and_write": false, 00:19:46.018 "abort": false, 00:19:46.018 "seek_hole": true, 00:19:46.018 "seek_data": true, 00:19:46.018 "copy": false, 00:19:46.018 "nvme_iov_md": false 00:19:46.018 }, 00:19:46.018 "driver_specific": { 00:19:46.018 "lvol": { 00:19:46.018 "lvol_store_uuid": "ff699a21-bcf2-4700-bd4a-a69c499fd4fa", 00:19:46.019 "base_bdev": "nvme0n1", 00:19:46.019 "thin_provision": true, 00:19:46.019 "num_allocated_clusters": 0, 00:19:46.019 "snapshot": false, 00:19:46.019 "clone": false, 00:19:46.019 "esnap_clone": false 00:19:46.019 } 00:19:46.019 } 00:19:46.019 } 00:19:46.019 ]' 00:19:46.019 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:46.019 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:46.019 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:46.019 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:46.019 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:46.019 04:10:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:19:46.019 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:19:46.019 04:10:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:46.277 04:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:19:46.277 04:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 074884ce-24a8-4d1f-b2f8-e377517741c4 00:19:46.277 04:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=074884ce-24a8-4d1f-b2f8-e377517741c4 00:19:46.277 04:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:46.277 04:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:19:46.277 04:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:19:46.277 04:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 074884ce-24a8-4d1f-b2f8-e377517741c4 00:19:46.277 04:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:46.277 { 00:19:46.277 "name": "074884ce-24a8-4d1f-b2f8-e377517741c4", 00:19:46.277 "aliases": [ 00:19:46.277 "lvs/nvme0n1p0" 00:19:46.277 ], 00:19:46.277 "product_name": "Logical Volume", 00:19:46.277 "block_size": 4096, 00:19:46.277 "num_blocks": 26476544, 00:19:46.277 "uuid": "074884ce-24a8-4d1f-b2f8-e377517741c4", 00:19:46.277 "assigned_rate_limits": { 00:19:46.277 "rw_ios_per_sec": 0, 00:19:46.277 "rw_mbytes_per_sec": 0, 00:19:46.277 "r_mbytes_per_sec": 0, 00:19:46.277 "w_mbytes_per_sec": 0 00:19:46.277 }, 00:19:46.277 "claimed": false, 00:19:46.277 "zoned": false, 00:19:46.277 "supported_io_types": { 00:19:46.277 "read": true, 00:19:46.277 "write": true, 00:19:46.277 "unmap": true, 00:19:46.277 "flush": false, 00:19:46.277 "reset": true, 00:19:46.277 "nvme_admin": false, 00:19:46.277 "nvme_io": false, 00:19:46.277 "nvme_io_md": false, 00:19:46.277 "write_zeroes": true, 00:19:46.277 "zcopy": false, 00:19:46.277 "get_zone_info": false, 00:19:46.277 "zone_management": false, 00:19:46.277 "zone_append": false, 00:19:46.277 "compare": false, 00:19:46.277 "compare_and_write": false, 00:19:46.277 "abort": false, 00:19:46.277 "seek_hole": true, 00:19:46.277 "seek_data": true, 00:19:46.277 "copy": false, 00:19:46.277 "nvme_iov_md": false 00:19:46.277 }, 00:19:46.277 "driver_specific": { 00:19:46.277 "lvol": { 00:19:46.277 "lvol_store_uuid": "ff699a21-bcf2-4700-bd4a-a69c499fd4fa", 00:19:46.277 "base_bdev": "nvme0n1", 00:19:46.277 "thin_provision": true, 00:19:46.277 "num_allocated_clusters": 0, 00:19:46.277 "snapshot": false, 00:19:46.277 "clone": false, 00:19:46.277 "esnap_clone": false 00:19:46.277 } 00:19:46.277 } 00:19:46.277 } 00:19:46.277 ]' 00:19:46.277 04:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:46.277 04:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:19:46.277 04:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:46.537 04:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:46.537 04:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:46.537 04:10:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:19:46.537 04:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:19:46.537 04:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 074884ce-24a8-4d1f-b2f8-e377517741c4 
--l2p_dram_limit 10' 00:19:46.537 04:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:19:46.537 04:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:19:46.537 04:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:46.537 04:10:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 074884ce-24a8-4d1f-b2f8-e377517741c4 --l2p_dram_limit 10 -c nvc0n1p0 00:19:46.537 [2024-10-13 04:10:39.647815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.537 [2024-10-13 04:10:39.647992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:46.537 [2024-10-13 04:10:39.648013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:46.537 [2024-10-13 04:10:39.648020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.537 [2024-10-13 04:10:39.648076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.537 [2024-10-13 04:10:39.648085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.537 [2024-10-13 04:10:39.648094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:46.537 [2024-10-13 04:10:39.648116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.537 [2024-10-13 04:10:39.648145] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:46.537 [2024-10-13 04:10:39.648754] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:46.537 [2024-10-13 04:10:39.648770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.537 [2024-10-13 04:10:39.648776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.537 [2024-10-13 04:10:39.648784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:19:46.537 [2024-10-13 04:10:39.648790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.537 [2024-10-13 04:10:39.648844] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ed1e26c3-d8e3-46aa-a144-99192ed32845 00:19:46.537 [2024-10-13 04:10:39.649847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.537 [2024-10-13 04:10:39.649878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:46.537 [2024-10-13 04:10:39.649886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:46.537 [2024-10-13 04:10:39.649896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.537 [2024-10-13 04:10:39.654783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.537 [2024-10-13 04:10:39.654892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.537 [2024-10-13 04:10:39.654904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.855 ms 00:19:46.537 [2024-10-13 04:10:39.654912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.537 [2024-10-13 04:10:39.654982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.537 [2024-10-13 04:10:39.654991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.537 [2024-10-13 04:10:39.654997] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:46.537 [2024-10-13 04:10:39.655006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.537 [2024-10-13 04:10:39.655038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.537 [2024-10-13 04:10:39.655047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:46.537 [2024-10-13 04:10:39.655053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:46.537 [2024-10-13 04:10:39.655060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.537 [2024-10-13 04:10:39.655076] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.537 [2024-10-13 04:10:39.657959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.537 [2024-10-13 04:10:39.658051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.537 [2024-10-13 04:10:39.658065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.886 ms 00:19:46.537 [2024-10-13 04:10:39.658075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.537 [2024-10-13 04:10:39.658103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.537 [2024-10-13 04:10:39.658110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:46.537 [2024-10-13 04:10:39.658118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:46.537 [2024-10-13 04:10:39.658124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.537 [2024-10-13 04:10:39.658149] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:46.537 [2024-10-13 04:10:39.658257] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:46.537 [2024-10-13 04:10:39.658269] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:46.537 [2024-10-13 04:10:39.658278] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:46.537 [2024-10-13 04:10:39.658287] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:46.537 [2024-10-13 04:10:39.658293] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:46.537 [2024-10-13 04:10:39.658301] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:46.537 [2024-10-13 04:10:39.658306] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:46.537 [2024-10-13 04:10:39.658313] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:46.537 [2024-10-13 04:10:39.658319] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:46.537 [2024-10-13 04:10:39.658327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.537 [2024-10-13 04:10:39.658334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:46.537 [2024-10-13 04:10:39.658342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:19:46.537 [2024-10-13 04:10:39.658351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.537 [2024-10-13 04:10:39.658417] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.537 [2024-10-13 04:10:39.658424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:46.537 [2024-10-13 04:10:39.658431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:46.537 [2024-10-13 04:10:39.658437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.537 [2024-10-13 04:10:39.658512] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:46.537 [2024-10-13 04:10:39.658519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:46.537 [2024-10-13 04:10:39.658529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.537 [2024-10-13 04:10:39.658534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.537 [2024-10-13 04:10:39.658541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:46.537 [2024-10-13 04:10:39.658546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:46.537 [2024-10-13 04:10:39.658553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:46.537 [2024-10-13 04:10:39.658559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:46.537 [2024-10-13 04:10:39.658565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:46.537 [2024-10-13 04:10:39.658571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.537 [2024-10-13 04:10:39.658577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:46.537 [2024-10-13 04:10:39.658582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:46.537 [2024-10-13 04:10:39.658589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.537 [2024-10-13 04:10:39.658594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:46.537 [2024-10-13 04:10:39.658601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:46.537 [2024-10-13 04:10:39.658607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.537 [2024-10-13 04:10:39.658630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:46.537 [2024-10-13 04:10:39.658636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:46.537 [2024-10-13 04:10:39.658642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.537 [2024-10-13 04:10:39.658648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:46.537 [2024-10-13 04:10:39.658655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:46.537 [2024-10-13 04:10:39.658660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.537 [2024-10-13 04:10:39.658667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:46.537 [2024-10-13 04:10:39.658672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:46.537 [2024-10-13 04:10:39.658683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.537 [2024-10-13 04:10:39.658689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:46.537 [2024-10-13 04:10:39.658695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:46.537 [2024-10-13 04:10:39.658700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.537 [2024-10-13 04:10:39.658706] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:46.537 [2024-10-13 04:10:39.658711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:46.537 [2024-10-13 04:10:39.658717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.537 [2024-10-13 04:10:39.658722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:46.537 [2024-10-13 04:10:39.658730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:46.537 [2024-10-13 04:10:39.658735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.537 [2024-10-13 04:10:39.658741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:46.537 [2024-10-13 04:10:39.658746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:46.537 [2024-10-13 04:10:39.658752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.537 [2024-10-13 04:10:39.658757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:46.537 [2024-10-13 04:10:39.658763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:46.538 [2024-10-13 04:10:39.658768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.538 [2024-10-13 04:10:39.658774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:46.538 [2024-10-13 04:10:39.658779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:46.538 [2024-10-13 04:10:39.658785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.538 [2024-10-13 04:10:39.658789] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:46.538 [2024-10-13 04:10:39.658796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:46.538 [2024-10-13 04:10:39.658802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.538 [2024-10-13 04:10:39.658808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.538 [2024-10-13 04:10:39.658815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:46.538 [2024-10-13 04:10:39.658823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:46.538 [2024-10-13 04:10:39.658828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:46.538 [2024-10-13 04:10:39.658835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:46.538 [2024-10-13 04:10:39.658840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:46.538 [2024-10-13 04:10:39.658846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:46.538 [2024-10-13 04:10:39.658853] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:46.538 [2024-10-13 04:10:39.658862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.538 [2024-10-13 04:10:39.658868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:46.538 [2024-10-13 04:10:39.658875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:46.538 [2024-10-13 04:10:39.658880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:46.538 [2024-10-13 04:10:39.658886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:46.538 [2024-10-13 04:10:39.658891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:46.538 [2024-10-13 04:10:39.658898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:46.538 [2024-10-13 04:10:39.658903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:46.538 [2024-10-13 04:10:39.658910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:46.538 [2024-10-13 04:10:39.658915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:46.538 [2024-10-13 04:10:39.658923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:46.538 [2024-10-13 04:10:39.658929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:46.538 [2024-10-13 04:10:39.658935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:46.538 [2024-10-13 04:10:39.658940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:46.538 [2024-10-13 04:10:39.658947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:46.538 [2024-10-13 04:10:39.658952] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:46.538 [2024-10-13 04:10:39.658959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.538 [2024-10-13 04:10:39.658968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.538 [2024-10-13 04:10:39.658974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:46.538 [2024-10-13 04:10:39.658979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:46.538 [2024-10-13 04:10:39.658987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:46.538 [2024-10-13 04:10:39.658992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.538 [2024-10-13 04:10:39.659000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:46.538 [2024-10-13 04:10:39.659005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:19:46.538 [2024-10-13 04:10:39.659012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.538 [2024-10-13 04:10:39.659054] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:46.538 [2024-10-13 04:10:39.659064] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:49.069 [2024-10-13 04:10:41.689242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.689305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:49.069 [2024-10-13 04:10:41.689320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2030.180 ms 00:19:49.069 [2024-10-13 04:10:41.689331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.714889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.714936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.069 [2024-10-13 04:10:41.714949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.358 ms 00:19:49.069 [2024-10-13 04:10:41.714958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.715076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.715087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:49.069 [2024-10-13 04:10:41.715096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:49.069 [2024-10-13 04:10:41.715106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.745181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.745219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.069 [2024-10-13 04:10:41.745230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.042 ms 00:19:49.069 [2024-10-13 04:10:41.745239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.745267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.745278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.069 [2024-10-13 04:10:41.745286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:49.069 [2024-10-13 04:10:41.745298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.745654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.745672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.069 [2024-10-13 04:10:41.745681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:19:49.069 [2024-10-13 04:10:41.745690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.745790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.745800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.069 [2024-10-13 04:10:41.745808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:49.069 [2024-10-13 04:10:41.745819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.759689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.759858] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.069 [2024-10-13 04:10:41.759874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.851 ms 00:19:49.069 [2024-10-13 04:10:41.759885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.771012] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:49.069 [2024-10-13 04:10:41.773767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.773795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:49.069 [2024-10-13 04:10:41.773808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.811 ms 00:19:49.069 [2024-10-13 04:10:41.773816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.836860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.836915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:49.069 [2024-10-13 04:10:41.836932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.016 ms 00:19:49.069 [2024-10-13 04:10:41.836941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.837119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.837131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:49.069 [2024-10-13 04:10:41.837144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:19:49.069 [2024-10-13 04:10:41.837154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.859944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.859977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:49.069 [2024-10-13 04:10:41.859991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.746 ms 00:19:49.069 [2024-10-13 04:10:41.859999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.881999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.882028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:49.069 [2024-10-13 04:10:41.882041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.963 ms 00:19:49.069 [2024-10-13 04:10:41.882048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.882597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.882621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:49.069 [2024-10-13 04:10:41.882632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:19:49.069 [2024-10-13 04:10:41.882639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.947385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.947550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:49.069 [2024-10-13 04:10:41.947573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.700 ms 00:19:49.069 [2024-10-13 04:10:41.947581] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.971265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.971308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:49.069 [2024-10-13 04:10:41.971323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.599 ms 00:19:49.069 [2024-10-13 04:10:41.971330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:41.994323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:41.994448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:49.069 [2024-10-13 04:10:41.994468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.956 ms 00:19:49.069 [2024-10-13 04:10:41.994475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:42.017493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:42.017629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:49.069 [2024-10-13 04:10:42.017648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.983 ms 00:19:49.069 [2024-10-13 04:10:42.017655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:42.017692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:42.017701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:49.069 [2024-10-13 04:10:42.017713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:49.069 [2024-10-13 04:10:42.017720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:42.017794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.069 [2024-10-13 04:10:42.017804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:49.069 [2024-10-13 04:10:42.017813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:49.069 [2024-10-13 04:10:42.017820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.069 [2024-10-13 04:10:42.018609] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2370.387 ms, result 0 00:19:49.069 { 00:19:49.069 "name": "ftl0", 00:19:49.069 "uuid": "ed1e26c3-d8e3-46aa-a144-99192ed32845" 00:19:49.069 } 00:19:49.069 04:10:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:19:49.069 04:10:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:19:49.328 /dev/nbd0 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:19:49.328 1+0 records in 00:19:49.328 1+0 records out 00:19:49.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287197 s, 14.3 MB/s 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:19:49.328 04:10:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:19:49.587 [2024-10-13 04:10:42.535959] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:19:49.587 [2024-10-13 04:10:42.536076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76040 ] 00:19:49.587 [2024-10-13 04:10:42.685013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.846 [2024-10-13 04:10:42.784414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.223  [2024-10-13T04:10:45.319Z] Copying: 196/1024 [MB] (196 MBps) [2024-10-13T04:10:46.257Z] Copying: 409/1024 [MB] (212 MBps) [2024-10-13T04:10:47.192Z] Copying: 668/1024 [MB] (258 MBps) [2024-10-13T04:10:47.451Z] Copying: 921/1024 [MB] (253 MBps) [2024-10-13T04:10:48.018Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:19:54.858 00:19:54.858 04:10:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:56.770 04:10:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:19:56.770 [2024-10-13 04:10:49.927008] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
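The per-interval `Copying: N/1024 [MB] (R MBps)` progress lines above lend themselves to quick throughput comparisons between runs. A minimal parsing sketch, assuming only the progress-line format visible in this console output and the usual one-entry-per-line layout of the raw log; the script itself is illustrative and not part of the SPDK test suite:

```python
#!/usr/bin/env python3
"""Summarize spdk_dd throughput from 'Copying: N/M [MB] (R MBps)' lines.

A sketch based only on the progress-line format visible in this console
log; it is not part of the SPDK test suite.
"""
import re
import sys

# Matches "Copying: 196/1024 [MB] (196 MBps)" as well as the closing
# "Copying: 1024/1024 [MB] (average 232 MBps)" summary emitted by spdk_dd.
PROGRESS = re.compile(r"Copying:\s+(\d+)/(\d+)\s+\[MB\]\s+\((average\s+)?(\d+)\s+MBps\)")


def summarize(lines):
    """Return (per-interval rates in MBps, reported end-of-run average)."""
    rates, average = [], None
    for line in lines:
        m = PROGRESS.search(line)
        if not m:
            continue
        _copied, _total, is_average, rate = m.groups()
        if is_average:
            average = int(rate)
        else:
            rates.append(int(rate))
    return rates, average


if __name__ == "__main__":
    rates, average = summarize(sys.stdin)
    if rates:
        print(f"intervals={len(rates)} min={min(rates)} max={max(rates)} "
              f"mean={sum(rates) / len(rates):.1f} MBps")
    if average is not None:
        print(f"reported average: {average} MBps")
```

Piped over the dd progress section of this log, the sketch would report the per-interval spread (roughly 196-258 MBps on the first copy) behind the 232 MBps average printed above.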
00:19:56.770 [2024-10-13 04:10:49.927098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76119 ] 00:19:57.029 [2024-10-13 04:10:50.068713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.029 [2024-10-13 04:10:50.151633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.404  [2024-10-13T04:10:52.498Z] Copying: 35/1024 [MB] (35 MBps) [2024-10-13T04:10:53.433Z] Copying: 64/1024 [MB] (29 MBps) [2024-10-13T04:10:54.368Z] Copying: 90/1024 [MB] (25 MBps) [2024-10-13T04:10:55.741Z] Copying: 120/1024 [MB] (30 MBps) [2024-10-13T04:10:56.676Z] Copying: 148/1024 [MB] (28 MBps) [2024-10-13T04:10:57.610Z] Copying: 178/1024 [MB] (30 MBps) [2024-10-13T04:10:58.544Z] Copying: 214/1024 [MB] (35 MBps) [2024-10-13T04:10:59.478Z] Copying: 250/1024 [MB] (35 MBps) [2024-10-13T04:11:00.412Z] Copying: 284/1024 [MB] (34 MBps) [2024-10-13T04:11:01.345Z] Copying: 314/1024 [MB] (30 MBps) [2024-10-13T04:11:02.752Z] Copying: 345/1024 [MB] (30 MBps) [2024-10-13T04:11:03.686Z] Copying: 375/1024 [MB] (30 MBps) [2024-10-13T04:11:04.619Z] Copying: 407/1024 [MB] (31 MBps) [2024-10-13T04:11:05.553Z] Copying: 443/1024 [MB] (35 MBps) [2024-10-13T04:11:06.487Z] Copying: 474/1024 [MB] (31 MBps) [2024-10-13T04:11:07.420Z] Copying: 505/1024 [MB] (30 MBps) [2024-10-13T04:11:08.353Z] Copying: 537/1024 [MB] (32 MBps) [2024-10-13T04:11:09.726Z] Copying: 568/1024 [MB] (30 MBps) [2024-10-13T04:11:10.659Z] Copying: 604/1024 [MB] (36 MBps) [2024-10-13T04:11:11.592Z] Copying: 637/1024 [MB] (32 MBps) [2024-10-13T04:11:12.526Z] Copying: 668/1024 [MB] (30 MBps) [2024-10-13T04:11:13.493Z] Copying: 698/1024 [MB] (30 MBps) [2024-10-13T04:11:14.427Z] Copying: 732/1024 [MB] (34 MBps) [2024-10-13T04:11:15.361Z] Copying: 769/1024 [MB] (36 MBps) [2024-10-13T04:11:16.735Z] Copying: 805/1024 [MB] (36 MBps) [2024-10-13T04:11:17.670Z] Copying: 841/1024 [MB] (36 MBps) [2024-10-13T04:11:18.604Z] Copying: 877/1024 [MB] (36 MBps) [2024-10-13T04:11:19.539Z] Copying: 913/1024 [MB] (36 MBps) [2024-10-13T04:11:20.474Z] Copying: 945/1024 [MB] (31 MBps) [2024-10-13T04:11:21.409Z] Copying: 975/1024 [MB] (30 MBps) [2024-10-13T04:11:21.977Z] Copying: 1007/1024 [MB] (32 MBps) [2024-10-13T04:11:22.544Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:20:29.384 00:20:29.384 04:11:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:20:29.384 04:11:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:20:29.643 04:11:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:29.903 [2024-10-13 04:11:22.859406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.903 [2024-10-13 04:11:22.859453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:29.903 [2024-10-13 04:11:22.859464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:29.903 [2024-10-13 04:11:22.859472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.903 [2024-10-13 04:11:22.859490] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:29.903 [2024-10-13 04:11:22.861621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:29.903 [2024-10-13 04:11:22.861653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:29.903 [2024-10-13 04:11:22.861662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms 00:20:29.903 [2024-10-13 04:11:22.861668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.903 [2024-10-13 04:11:22.863426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.903 [2024-10-13 04:11:22.863456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:29.903 [2024-10-13 04:11:22.863464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.734 ms 00:20:29.903 [2024-10-13 04:11:22.863470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.903 [2024-10-13 04:11:22.877286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.903 [2024-10-13 04:11:22.877317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:29.903 [2024-10-13 04:11:22.877327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.799 ms 00:20:29.903 [2024-10-13 04:11:22.877337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.903 [2024-10-13 04:11:22.882162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.903 [2024-10-13 04:11:22.882185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:29.903 [2024-10-13 04:11:22.882195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.797 ms 00:20:29.903 [2024-10-13 04:11:22.882201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.903 [2024-10-13 04:11:22.900154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.903 [2024-10-13 04:11:22.900183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:29.903 [2024-10-13 04:11:22.900193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.896 ms 00:20:29.903 [2024-10-13 04:11:22.900199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.903 [2024-10-13 04:11:22.911761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.903 [2024-10-13 04:11:22.911790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:29.903 [2024-10-13 04:11:22.911801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.529 ms 00:20:29.903 [2024-10-13 04:11:22.911808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.903 [2024-10-13 04:11:22.911913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.903 [2024-10-13 04:11:22.911923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:29.903 [2024-10-13 04:11:22.911932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:29.903 [2024-10-13 04:11:22.911937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.903 [2024-10-13 04:11:22.929592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.903 [2024-10-13 04:11:22.929625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:29.903 [2024-10-13 04:11:22.929634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.639 ms 00:20:29.903 [2024-10-13 04:11:22.929640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.903 [2024-10-13 
04:11:22.946428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.903 [2024-10-13 04:11:22.946465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:29.903 [2024-10-13 04:11:22.946475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.759 ms 00:20:29.903 [2024-10-13 04:11:22.946480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.903 [2024-10-13 04:11:22.963094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.903 [2024-10-13 04:11:22.963121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:29.903 [2024-10-13 04:11:22.963130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.582 ms 00:20:29.904 [2024-10-13 04:11:22.963135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.904 [2024-10-13 04:11:22.979750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.904 [2024-10-13 04:11:22.979775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:29.904 [2024-10-13 04:11:22.979784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.558 ms 00:20:29.904 [2024-10-13 04:11:22.979790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.904 [2024-10-13 04:11:22.979817] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:29.904 [2024-10-13 04:11:22.979827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 
261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.979995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980264] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 04:11:22.980415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:29.904 [2024-10-13 
04:11:22.980421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:29.905 [2024-10-13 04:11:22.980427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:29.905 [2024-10-13 04:11:22.980433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:29.905 [2024-10-13 04:11:22.980439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:29.905 [2024-10-13 04:11:22.980446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:29.905 [2024-10-13 04:11:22.980451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:29.905 [2024-10-13 04:11:22.980458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:29.905 [2024-10-13 04:11:22.980464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:29.905 [2024-10-13 04:11:22.980471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:29.905 [2024-10-13 04:11:22.980476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:29.905 [2024-10-13 04:11:22.980484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:29.905 [2024-10-13 04:11:22.980495] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:29.905 [2024-10-13 04:11:22.980503] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed1e26c3-d8e3-46aa-a144-99192ed32845 00:20:29.905 [2024-10-13 04:11:22.980508] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:29.905 [2024-10-13 04:11:22.980516] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:29.905 [2024-10-13 04:11:22.980522] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:29.905 [2024-10-13 04:11:22.980529] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:29.905 [2024-10-13 04:11:22.980534] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:29.905 [2024-10-13 04:11:22.980541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:29.905 [2024-10-13 04:11:22.980548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:29.905 [2024-10-13 04:11:22.980554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:29.905 [2024-10-13 04:11:22.980561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:29.905 [2024-10-13 04:11:22.980568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.905 [2024-10-13 04:11:22.980573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:29.905 [2024-10-13 04:11:22.980580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:20:29.905 [2024-10-13 04:11:22.980586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.905 [2024-10-13 04:11:22.990033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.905 [2024-10-13 04:11:22.990057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:29.905 [2024-10-13 04:11:22.990065] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.423 ms 00:20:29.905 [2024-10-13 04:11:22.990071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.905 [2024-10-13 04:11:22.990344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.905 [2024-10-13 04:11:22.990355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:29.905 [2024-10-13 04:11:22.990363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:20:29.905 [2024-10-13 04:11:22.990368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.905 [2024-10-13 04:11:23.022602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.905 [2024-10-13 04:11:23.022644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.905 [2024-10-13 04:11:23.022654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.905 [2024-10-13 04:11:23.022661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.905 [2024-10-13 04:11:23.022709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.905 [2024-10-13 04:11:23.022716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.905 [2024-10-13 04:11:23.022724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.905 [2024-10-13 04:11:23.022730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.905 [2024-10-13 04:11:23.022814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.905 [2024-10-13 04:11:23.022822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.905 [2024-10-13 04:11:23.022830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.905 [2024-10-13 04:11:23.022835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.905 [2024-10-13 04:11:23.022852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.905 [2024-10-13 04:11:23.022858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.905 [2024-10-13 04:11:23.022865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.905 [2024-10-13 04:11:23.022870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.164 [2024-10-13 04:11:23.081518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.164 [2024-10-13 04:11:23.081552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:30.164 [2024-10-13 04:11:23.081562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.164 [2024-10-13 04:11:23.081571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.164 [2024-10-13 04:11:23.129061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.164 [2024-10-13 04:11:23.129099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:30.164 [2024-10-13 04:11:23.129110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.164 [2024-10-13 04:11:23.129116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.164 [2024-10-13 04:11:23.129182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.164 [2024-10-13 04:11:23.129189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:20:30.164 [2024-10-13 04:11:23.129197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.164 [2024-10-13 04:11:23.129203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.164 [2024-10-13 04:11:23.129253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.164 [2024-10-13 04:11:23.129263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:30.164 [2024-10-13 04:11:23.129271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.164 [2024-10-13 04:11:23.129276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.164 [2024-10-13 04:11:23.129346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.164 [2024-10-13 04:11:23.129354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:30.164 [2024-10-13 04:11:23.129361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.164 [2024-10-13 04:11:23.129367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.164 [2024-10-13 04:11:23.129393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.164 [2024-10-13 04:11:23.129400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:30.164 [2024-10-13 04:11:23.129409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.164 [2024-10-13 04:11:23.129414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.164 [2024-10-13 04:11:23.129445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.164 [2024-10-13 04:11:23.129451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:30.164 [2024-10-13 04:11:23.129459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.164 [2024-10-13 04:11:23.129464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.164 [2024-10-13 04:11:23.129502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:30.164 [2024-10-13 04:11:23.129511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:30.164 [2024-10-13 04:11:23.129519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:30.164 [2024-10-13 04:11:23.129524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.164 [2024-10-13 04:11:23.129666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.193 ms, result 0 00:20:30.164 true 00:20:30.164 04:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 75909 00:20:30.164 04:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75909 00:20:30.164 04:11:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:20:30.164 [2024-10-13 04:11:23.203894] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
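Each FTL management step in this log is reported as a `trace_step` quadruple (Action or Rollback, name, duration, status), so the slow phases of a startup or shutdown can be ranked mechanically. A minimal sketch, again assuming only the NOTICE message format shown here and one log entry per line in the raw console output; it is not part of the SPDK repository:

```python
#!/usr/bin/env python3
"""Rank FTL management steps by duration from mngt/ftl_mngt.c trace_step lines.

A sketch based only on the Action/Rollback, name, duration and status
NOTICE lines visible in this console log; not part of the SPDK repository.
"""
import re
import sys

KIND = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] (Action|Rollback)\b")
NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
DURATION = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")
STATUS = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] status: (\d+)")


def collect(lines):
    """Group the four trace_step lines of each step into one record."""
    steps, current = [], None
    for line in lines:
        if m := KIND.search(line):
            current = {"dev": m.group(1), "kind": m.group(2)}
        elif current is not None:
            if m := NAME.search(line):
                current["name"] = m.group(1)
            elif m := DURATION.search(line):
                current["ms"] = float(m.group(1))
            elif m := STATUS.search(line):
                current["status"] = int(m.group(1))
                steps.append(current)
                current = None
    return steps


if __name__ == "__main__":
    steps = collect(sys.stdin)
    for step in sorted(steps, key=lambda s: s.get("ms", 0.0), reverse=True)[:10]:
        print(f"{step.get('ms', 0.0):10.3f} ms  {step['kind']:<8} "
              f"{step.get('name', '?')} (status {step.get('status', '?')})")
```

Run against the raw console log (invocation and file names hypothetical), it would surface the 2030 ms "Scrub NV cache" step as the dominant contributor to the 2370 ms 'FTL startup' total reported earlier by `finish_msg`, with the remaining steps each well under 100 ms.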
00:20:30.164 [2024-10-13 04:11:23.204011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76477 ] 00:20:30.423 [2024-10-13 04:11:23.349834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.423 [2024-10-13 04:11:23.429315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.799  [2024-10-13T04:11:25.892Z] Copying: 257/1024 [MB] (257 MBps) [2024-10-13T04:11:26.829Z] Copying: 510/1024 [MB] (252 MBps) [2024-10-13T04:11:27.765Z] Copying: 767/1024 [MB] (257 MBps) [2024-10-13T04:11:27.765Z] Copying: 1019/1024 [MB] (251 MBps) [2024-10-13T04:11:28.333Z] Copying: 1024/1024 [MB] (average 254 MBps) 00:20:35.173 00:20:35.173 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75909 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:20:35.173 04:11:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:35.173 [2024-10-13 04:11:28.244210] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:20:35.173 [2024-10-13 04:11:28.244324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76534 ] 00:20:35.431 [2024-10-13 04:11:28.392766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.431 [2024-10-13 04:11:28.473174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.689 [2024-10-13 04:11:28.681174] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:35.689 [2024-10-13 04:11:28.681229] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:35.689 [2024-10-13 04:11:28.743728] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:35.690 [2024-10-13 04:11:28.743999] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:35.690 [2024-10-13 04:11:28.744466] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:35.950 [2024-10-13 04:11:28.918781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-10-13 04:11:28.918822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:35.950 [2024-10-13 04:11:28.918832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:35.950 [2024-10-13 04:11:28.918838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.950 [2024-10-13 04:11:28.918877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-10-13 04:11:28.918885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:35.950 [2024-10-13 04:11:28.918891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:35.950 [2024-10-13 04:11:28.918897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.950 [2024-10-13 04:11:28.918910] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:35.950 
[2024-10-13 04:11:28.919448] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:35.950 [2024-10-13 04:11:28.919466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-10-13 04:11:28.919472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:35.950 [2024-10-13 04:11:28.919478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:20:35.950 [2024-10-13 04:11:28.919484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.950 [2024-10-13 04:11:28.920432] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:35.950 [2024-10-13 04:11:28.930137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-10-13 04:11:28.930166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:35.950 [2024-10-13 04:11:28.930177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.706 ms 00:20:35.950 [2024-10-13 04:11:28.930183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.950 [2024-10-13 04:11:28.930221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-10-13 04:11:28.930229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:35.950 [2024-10-13 04:11:28.930235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:35.950 [2024-10-13 04:11:28.930241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.950 [2024-10-13 04:11:28.934722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-10-13 04:11:28.934748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:35.950 [2024-10-13 04:11:28.934755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.445 ms 00:20:35.950 [2024-10-13 04:11:28.934762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.950 [2024-10-13 04:11:28.934814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-10-13 04:11:28.934821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:35.950 [2024-10-13 04:11:28.934828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:35.950 [2024-10-13 04:11:28.934833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.950 [2024-10-13 04:11:28.934868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-10-13 04:11:28.934875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:35.950 [2024-10-13 04:11:28.934884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:35.950 [2024-10-13 04:11:28.934890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.950 [2024-10-13 04:11:28.934903] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:35.950 [2024-10-13 04:11:28.937605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-10-13 04:11:28.937634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:35.950 [2024-10-13 04:11:28.937641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.705 ms 00:20:35.950 [2024-10-13 04:11:28.937647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:35.950 [2024-10-13 04:11:28.937672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-10-13 04:11:28.937678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:35.950 [2024-10-13 04:11:28.937685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:35.950 [2024-10-13 04:11:28.937691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.950 [2024-10-13 04:11:28.937704] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:35.950 [2024-10-13 04:11:28.937718] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:35.950 [2024-10-13 04:11:28.937746] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:35.950 [2024-10-13 04:11:28.937757] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:35.950 [2024-10-13 04:11:28.937835] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:35.950 [2024-10-13 04:11:28.937843] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:35.950 [2024-10-13 04:11:28.937851] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:35.950 [2024-10-13 04:11:28.937859] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:35.950 [2024-10-13 04:11:28.937865] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:35.950 [2024-10-13 04:11:28.937873] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:35.950 [2024-10-13 04:11:28.937879] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:35.950 [2024-10-13 04:11:28.937885] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:35.950 [2024-10-13 04:11:28.937891] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:35.950 [2024-10-13 04:11:28.937896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-10-13 04:11:28.937902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:35.950 [2024-10-13 04:11:28.937908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:20:35.950 [2024-10-13 04:11:28.937914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.950 [2024-10-13 04:11:28.937976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.950 [2024-10-13 04:11:28.937983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:35.950 [2024-10-13 04:11:28.937989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:35.950 [2024-10-13 04:11:28.937996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.950 [2024-10-13 04:11:28.938071] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:35.950 [2024-10-13 04:11:28.938079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:35.950 [2024-10-13 04:11:28.938085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.951 [2024-10-13 04:11:28.938091] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:35.951 [2024-10-13 04:11:28.938102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:35.951 [2024-10-13 04:11:28.938112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:35.951 [2024-10-13 04:11:28.938117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.951 [2024-10-13 04:11:28.938128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:35.951 [2024-10-13 04:11:28.938137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:35.951 [2024-10-13 04:11:28.938142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.951 [2024-10-13 04:11:28.938147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:35.951 [2024-10-13 04:11:28.938154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:35.951 [2024-10-13 04:11:28.938159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:35.951 [2024-10-13 04:11:28.938169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:35.951 [2024-10-13 04:11:28.938174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:35.951 [2024-10-13 04:11:28.938184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.951 [2024-10-13 04:11:28.938194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:35.951 [2024-10-13 04:11:28.938198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.951 [2024-10-13 04:11:28.938208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:35.951 [2024-10-13 04:11:28.938213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.951 [2024-10-13 04:11:28.938223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:35.951 [2024-10-13 04:11:28.938227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.951 [2024-10-13 04:11:28.938237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:35.951 [2024-10-13 04:11:28.938243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.951 [2024-10-13 04:11:28.938252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:35.951 
[2024-10-13 04:11:28.938257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:35.951 [2024-10-13 04:11:28.938261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.951 [2024-10-13 04:11:28.938266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:35.951 [2024-10-13 04:11:28.938271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:35.951 [2024-10-13 04:11:28.938276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:35.951 [2024-10-13 04:11:28.938286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:35.951 [2024-10-13 04:11:28.938291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938296] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:35.951 [2024-10-13 04:11:28.938302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:35.951 [2024-10-13 04:11:28.938308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.951 [2024-10-13 04:11:28.938313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.951 [2024-10-13 04:11:28.938320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:35.951 [2024-10-13 04:11:28.938326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:35.951 [2024-10-13 04:11:28.938331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:35.951 [2024-10-13 04:11:28.938337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:35.951 [2024-10-13 04:11:28.938341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:35.951 [2024-10-13 04:11:28.938346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:35.951 [2024-10-13 04:11:28.938352] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:35.951 [2024-10-13 04:11:28.938359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.951 [2024-10-13 04:11:28.938365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:35.951 [2024-10-13 04:11:28.938371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:35.951 [2024-10-13 04:11:28.938376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:35.951 [2024-10-13 04:11:28.938382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:35.951 [2024-10-13 04:11:28.938387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:35.951 [2024-10-13 04:11:28.938392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:35.951 [2024-10-13 04:11:28.938397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:20:35.951 [2024-10-13 04:11:28.938402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:35.951 [2024-10-13 04:11:28.938408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:35.951 [2024-10-13 04:11:28.938413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:35.951 [2024-10-13 04:11:28.938418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:35.951 [2024-10-13 04:11:28.938423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:35.951 [2024-10-13 04:11:28.938428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:35.951 [2024-10-13 04:11:28.938434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:35.951 [2024-10-13 04:11:28.938439] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:35.951 [2024-10-13 04:11:28.938445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.951 [2024-10-13 04:11:28.938451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:35.951 [2024-10-13 04:11:28.938457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:35.951 [2024-10-13 04:11:28.938463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:35.951 [2024-10-13 04:11:28.938468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:35.951 [2024-10-13 04:11:28.938474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.951 [2024-10-13 04:11:28.938479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:35.951 [2024-10-13 04:11:28.938485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:20:35.951 [2024-10-13 04:11:28.938491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.951 [2024-10-13 04:11:28.959406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.951 [2024-10-13 04:11:28.959440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.951 [2024-10-13 04:11:28.959448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.885 ms 00:20:35.951 [2024-10-13 04:11:28.959454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.951 [2024-10-13 04:11:28.959517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.951 [2024-10-13 04:11:28.959524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:35.951 [2024-10-13 04:11:28.959532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:35.951 [2024-10-13 
04:11:28.959538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.951 [2024-10-13 04:11:29.005730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.951 [2024-10-13 04:11:29.005775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.951 [2024-10-13 04:11:29.005785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.147 ms 00:20:35.951 [2024-10-13 04:11:29.005792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.951 [2024-10-13 04:11:29.005844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.951 [2024-10-13 04:11:29.005852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.951 [2024-10-13 04:11:29.005859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:35.951 [2024-10-13 04:11:29.005865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.951 [2024-10-13 04:11:29.006199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.951 [2024-10-13 04:11:29.006222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:35.951 [2024-10-13 04:11:29.006231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:35.951 [2024-10-13 04:11:29.006236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.951 [2024-10-13 04:11:29.006335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.951 [2024-10-13 04:11:29.006346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.951 [2024-10-13 04:11:29.006353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:35.951 [2024-10-13 04:11:29.006359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.951 [2024-10-13 04:11:29.016910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.951 [2024-10-13 04:11:29.016937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.951 [2024-10-13 04:11:29.016945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.535 ms 00:20:35.951 [2024-10-13 04:11:29.016951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.951 [2024-10-13 04:11:29.026621] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:35.951 [2024-10-13 04:11:29.026651] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:35.951 [2024-10-13 04:11:29.026661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.952 [2024-10-13 04:11:29.026667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:35.952 [2024-10-13 04:11:29.026674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.616 ms 00:20:35.952 [2024-10-13 04:11:29.026680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.952 [2024-10-13 04:11:29.045040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.952 [2024-10-13 04:11:29.045068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:35.952 [2024-10-13 04:11:29.045084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.328 ms 00:20:35.952 [2024-10-13 04:11:29.045090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:35.952 [2024-10-13 04:11:29.053635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.952 [2024-10-13 04:11:29.053660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:35.952 [2024-10-13 04:11:29.053668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.515 ms 00:20:35.952 [2024-10-13 04:11:29.053674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.952 [2024-10-13 04:11:29.062010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.952 [2024-10-13 04:11:29.062033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:35.952 [2024-10-13 04:11:29.062040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.311 ms 00:20:35.952 [2024-10-13 04:11:29.062046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.952 [2024-10-13 04:11:29.062512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.952 [2024-10-13 04:11:29.062530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:35.952 [2024-10-13 04:11:29.062538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:20:35.952 [2024-10-13 04:11:29.062544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.211 [2024-10-13 04:11:29.105915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.211 [2024-10-13 04:11:29.105959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:36.211 [2024-10-13 04:11:29.105970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.357 ms 00:20:36.211 [2024-10-13 04:11:29.105977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.211 [2024-10-13 04:11:29.113971] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:36.211 [2024-10-13 04:11:29.116098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.211 [2024-10-13 04:11:29.116136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:36.211 [2024-10-13 04:11:29.116148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.074 ms 00:20:36.211 [2024-10-13 04:11:29.116154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.211 [2024-10-13 04:11:29.116227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.211 [2024-10-13 04:11:29.116239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:36.211 [2024-10-13 04:11:29.116246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:36.211 [2024-10-13 04:11:29.116252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.211 [2024-10-13 04:11:29.116303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.211 [2024-10-13 04:11:29.116310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:36.211 [2024-10-13 04:11:29.116317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:36.211 [2024-10-13 04:11:29.116323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.211 [2024-10-13 04:11:29.116338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.211 [2024-10-13 04:11:29.116344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:36.211 
[2024-10-13 04:11:29.116352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:36.211 [2024-10-13 04:11:29.116358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.211 [2024-10-13 04:11:29.116382] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:36.211 [2024-10-13 04:11:29.116390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.211 [2024-10-13 04:11:29.116396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:36.211 [2024-10-13 04:11:29.116402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:36.211 [2024-10-13 04:11:29.116408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.211 [2024-10-13 04:11:29.134307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.211 [2024-10-13 04:11:29.134345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:36.211 [2024-10-13 04:11:29.134355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.885 ms 00:20:36.211 [2024-10-13 04:11:29.134361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.211 [2024-10-13 04:11:29.134470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.211 [2024-10-13 04:11:29.134490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:36.211 [2024-10-13 04:11:29.134498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:36.211 [2024-10-13 04:11:29.134504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.211 [2024-10-13 04:11:29.135262] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 216.143 ms, result 0 00:20:37.146  [2024-10-13T04:11:31.240Z] Copying: 51/1024 [MB] (51 MBps) [2024-10-13T04:11:32.173Z] Copying: 96/1024 [MB] (45 MBps) [2024-10-13T04:11:33.548Z] Copying: 146/1024 [MB] (49 MBps) [2024-10-13T04:11:34.482Z] Copying: 199/1024 [MB] (53 MBps) [2024-10-13T04:11:35.416Z] Copying: 249/1024 [MB] (50 MBps) [2024-10-13T04:11:36.351Z] Copying: 296/1024 [MB] (46 MBps) [2024-10-13T04:11:37.287Z] Copying: 337/1024 [MB] (40 MBps) [2024-10-13T04:11:38.286Z] Copying: 383/1024 [MB] (46 MBps) [2024-10-13T04:11:39.220Z] Copying: 409/1024 [MB] (25 MBps) [2024-10-13T04:11:40.156Z] Copying: 428/1024 [MB] (19 MBps) [2024-10-13T04:11:41.530Z] Copying: 452/1024 [MB] (23 MBps) [2024-10-13T04:11:42.464Z] Copying: 472/1024 [MB] (20 MBps) [2024-10-13T04:11:43.399Z] Copying: 509/1024 [MB] (36 MBps) [2024-10-13T04:11:44.331Z] Copying: 555/1024 [MB] (45 MBps) [2024-10-13T04:11:45.265Z] Copying: 599/1024 [MB] (43 MBps) [2024-10-13T04:11:46.200Z] Copying: 644/1024 [MB] (45 MBps) [2024-10-13T04:11:47.574Z] Copying: 689/1024 [MB] (45 MBps) [2024-10-13T04:11:48.514Z] Copying: 735/1024 [MB] (45 MBps) [2024-10-13T04:11:49.448Z] Copying: 780/1024 [MB] (45 MBps) [2024-10-13T04:11:50.391Z] Copying: 825/1024 [MB] (45 MBps) [2024-10-13T04:11:51.344Z] Copying: 871/1024 [MB] (45 MBps) [2024-10-13T04:11:52.278Z] Copying: 917/1024 [MB] (46 MBps) [2024-10-13T04:11:53.213Z] Copying: 963/1024 [MB] (45 MBps) [2024-10-13T04:11:54.588Z] Copying: 1014/1024 [MB] (50 MBps) [2024-10-13T04:11:54.588Z] Copying: 1048336/1048576 [kB] (9404 kBps) [2024-10-13T04:11:54.588Z] Copying: 1024/1024 [MB] (average 40 MBps)[2024-10-13 04:11:54.359799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:21:01.428 [2024-10-13 04:11:54.359861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:01.428 [2024-10-13 04:11:54.359874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:01.428 [2024-10-13 04:11:54.359881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.428 [2024-10-13 04:11:54.362112] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:01.428 [2024-10-13 04:11:54.364972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.428 [2024-10-13 04:11:54.365002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:01.428 [2024-10-13 04:11:54.365011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.827 ms 00:21:01.428 [2024-10-13 04:11:54.365018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.428 [2024-10-13 04:11:54.374460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.428 [2024-10-13 04:11:54.374498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:01.428 [2024-10-13 04:11:54.374507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.874 ms 00:21:01.428 [2024-10-13 04:11:54.374514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.428 [2024-10-13 04:11:54.390565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.428 [2024-10-13 04:11:54.390595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:01.428 [2024-10-13 04:11:54.390603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.039 ms 00:21:01.428 [2024-10-13 04:11:54.390610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.428 [2024-10-13 04:11:54.395458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.428 [2024-10-13 04:11:54.395482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:01.428 [2024-10-13 04:11:54.395494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.818 ms 00:21:01.428 [2024-10-13 04:11:54.395501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.428 [2024-10-13 04:11:54.413353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.428 [2024-10-13 04:11:54.413381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:01.428 [2024-10-13 04:11:54.413389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.822 ms 00:21:01.428 [2024-10-13 04:11:54.413394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.428 [2024-10-13 04:11:54.424100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.428 [2024-10-13 04:11:54.424141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:01.428 [2024-10-13 04:11:54.424151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.681 ms 00:21:01.428 [2024-10-13 04:11:54.424157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.428 [2024-10-13 04:11:54.473705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.428 [2024-10-13 04:11:54.473746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:01.428 [2024-10-13 04:11:54.473755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.520 ms 
00:21:01.428 [2024-10-13 04:11:54.473762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.428 [2024-10-13 04:11:54.491579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.428 [2024-10-13 04:11:54.491607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:01.428 [2024-10-13 04:11:54.491622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.802 ms 00:21:01.428 [2024-10-13 04:11:54.491628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.428 [2024-10-13 04:11:54.508969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.428 [2024-10-13 04:11:54.508994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:01.428 [2024-10-13 04:11:54.509001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.316 ms 00:21:01.428 [2024-10-13 04:11:54.509007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.428 [2024-10-13 04:11:54.525886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.428 [2024-10-13 04:11:54.525913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:01.428 [2024-10-13 04:11:54.525920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.856 ms 00:21:01.428 [2024-10-13 04:11:54.525925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.428 [2024-10-13 04:11:54.542719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.428 [2024-10-13 04:11:54.542744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:01.428 [2024-10-13 04:11:54.542752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.753 ms 00:21:01.428 [2024-10-13 04:11:54.542758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.428 [2024-10-13 04:11:54.542782] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:01.428 [2024-10-13 04:11:54.542793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127744 / 261120 wr_cnt: 1 state: open 00:21:01.428 [2024-10-13 04:11:54.542801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:01.428 [2024-10-13 04:11:54.542958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.542964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.542970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.542976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.542981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.542987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.542992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.542998] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 
04:11:54.543135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 
00:21:01.429 [2024-10-13 04:11:54.543399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:01.429 [2024-10-13 04:11:54.543491] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:01.429 [2024-10-13 04:11:54.543497] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed1e26c3-d8e3-46aa-a144-99192ed32845 00:21:01.429 [2024-10-13 04:11:54.543503] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127744 00:21:01.429 [2024-10-13 04:11:54.543508] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128704 00:21:01.429 [2024-10-13 04:11:54.543521] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127744 00:21:01.429 [2024-10-13 04:11:54.543527] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:21:01.429 [2024-10-13 04:11:54.543533] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:01.429 [2024-10-13 04:11:54.543539] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:01.429 [2024-10-13 04:11:54.543544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:01.429 [2024-10-13 04:11:54.543549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:01.429 [2024-10-13 04:11:54.543554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:01.429 [2024-10-13 04:11:54.543559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.429 [2024-10-13 
04:11:54.543567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:01.429 [2024-10-13 04:11:54.543573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:21:01.429 [2024-10-13 04:11:54.543578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.429 [2024-10-13 04:11:54.552952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.429 [2024-10-13 04:11:54.552980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:01.429 [2024-10-13 04:11:54.552988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.362 ms 00:21:01.429 [2024-10-13 04:11:54.552994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.429 [2024-10-13 04:11:54.553253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.429 [2024-10-13 04:11:54.553260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:01.429 [2024-10-13 04:11:54.553266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:21:01.429 [2024-10-13 04:11:54.553271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.429 [2024-10-13 04:11:54.578495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.429 [2024-10-13 04:11:54.578522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:01.429 [2024-10-13 04:11:54.578530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.429 [2024-10-13 04:11:54.578536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.429 [2024-10-13 04:11:54.578582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.430 [2024-10-13 04:11:54.578588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:01.430 [2024-10-13 04:11:54.578594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.430 [2024-10-13 04:11:54.578600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.430 [2024-10-13 04:11:54.578656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.430 [2024-10-13 04:11:54.578665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:01.430 [2024-10-13 04:11:54.578671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.430 [2024-10-13 04:11:54.578676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.430 [2024-10-13 04:11:54.578687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.430 [2024-10-13 04:11:54.578693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:01.430 [2024-10-13 04:11:54.578698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.430 [2024-10-13 04:11:54.578704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.688 [2024-10-13 04:11:54.636961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.688 [2024-10-13 04:11:54.636997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:01.688 [2024-10-13 04:11:54.637005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.688 [2024-10-13 04:11:54.637011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.688 [2024-10-13 04:11:54.685138] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.688 [2024-10-13 04:11:54.685176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.688 [2024-10-13 04:11:54.685184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.688 [2024-10-13 04:11:54.685190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.688 [2024-10-13 04:11:54.685246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.688 [2024-10-13 04:11:54.685257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:01.688 [2024-10-13 04:11:54.685264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.688 [2024-10-13 04:11:54.685270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.688 [2024-10-13 04:11:54.685296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.688 [2024-10-13 04:11:54.685303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:01.688 [2024-10-13 04:11:54.685309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.688 [2024-10-13 04:11:54.685315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.688 [2024-10-13 04:11:54.685378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.688 [2024-10-13 04:11:54.685385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:01.688 [2024-10-13 04:11:54.685394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.688 [2024-10-13 04:11:54.685400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.688 [2024-10-13 04:11:54.685422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.688 [2024-10-13 04:11:54.685428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:01.688 [2024-10-13 04:11:54.685434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.688 [2024-10-13 04:11:54.685441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.688 [2024-10-13 04:11:54.685468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.688 [2024-10-13 04:11:54.685475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:01.688 [2024-10-13 04:11:54.685484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.688 [2024-10-13 04:11:54.685490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.688 [2024-10-13 04:11:54.685522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.688 [2024-10-13 04:11:54.685529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:01.688 [2024-10-13 04:11:54.685536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.688 [2024-10-13 04:11:54.685542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.688 [2024-10-13 04:11:54.685655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 328.235 ms, result 0 00:21:04.220 00:21:04.220 00:21:04.220 04:11:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:21:06.120 04:11:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:06.120 [2024-10-13 04:11:59.081136] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:21:06.120 [2024-10-13 04:11:59.081249] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76837 ] 00:21:06.120 [2024-10-13 04:11:59.232445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.378 [2024-10-13 04:11:59.332781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.638 [2024-10-13 04:11:59.586818] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:06.638 [2024-10-13 04:11:59.586883] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:06.638 [2024-10-13 04:11:59.740467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.638 [2024-10-13 04:11:59.740703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:06.638 [2024-10-13 04:11:59.740723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:06.638 [2024-10-13 04:11:59.740736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.638 [2024-10-13 04:11:59.740787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.638 [2024-10-13 04:11:59.740797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:06.638 [2024-10-13 04:11:59.740805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:06.638 [2024-10-13 04:11:59.740814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.638 [2024-10-13 04:11:59.740834] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:06.638 [2024-10-13 04:11:59.741468] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:06.638 [2024-10-13 04:11:59.741483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.638 [2024-10-13 04:11:59.741492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:06.638 [2024-10-13 04:11:59.741501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:21:06.638 [2024-10-13 04:11:59.741508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.638 [2024-10-13 04:11:59.742572] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:06.638 [2024-10-13 04:11:59.754818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.638 [2024-10-13 04:11:59.754851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:06.638 [2024-10-13 04:11:59.754863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.248 ms 00:21:06.638 [2024-10-13 04:11:59.754871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.638 [2024-10-13 04:11:59.754920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.638 [2024-10-13 04:11:59.754930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:06.638 [2024-10-13 
04:11:59.754940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:06.638 [2024-10-13 04:11:59.754947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.638 [2024-10-13 04:11:59.759877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.638 [2024-10-13 04:11:59.759908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:06.638 [2024-10-13 04:11:59.759917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.871 ms 00:21:06.638 [2024-10-13 04:11:59.759925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.638 [2024-10-13 04:11:59.760001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.638 [2024-10-13 04:11:59.760010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:06.638 [2024-10-13 04:11:59.760018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:06.638 [2024-10-13 04:11:59.760026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.638 [2024-10-13 04:11:59.760063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.638 [2024-10-13 04:11:59.760073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:06.638 [2024-10-13 04:11:59.760081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:06.638 [2024-10-13 04:11:59.760088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.638 [2024-10-13 04:11:59.760109] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:06.638 [2024-10-13 04:11:59.763353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.638 [2024-10-13 04:11:59.763483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:06.638 [2024-10-13 04:11:59.763498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.249 ms 00:21:06.638 [2024-10-13 04:11:59.763506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.638 [2024-10-13 04:11:59.763538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.638 [2024-10-13 04:11:59.763546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:06.638 [2024-10-13 04:11:59.763554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:06.638 [2024-10-13 04:11:59.763561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.638 [2024-10-13 04:11:59.763581] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:06.638 [2024-10-13 04:11:59.763598] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:06.638 [2024-10-13 04:11:59.763651] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:06.638 [2024-10-13 04:11:59.763669] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:06.638 [2024-10-13 04:11:59.763772] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:06.638 [2024-10-13 04:11:59.763782] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:06.638 [2024-10-13 04:11:59.763792] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:06.638 [2024-10-13 04:11:59.763802] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:06.638 [2024-10-13 04:11:59.763811] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:06.638 [2024-10-13 04:11:59.763820] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:06.638 [2024-10-13 04:11:59.763827] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:06.638 [2024-10-13 04:11:59.763834] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:06.638 [2024-10-13 04:11:59.763841] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:06.638 [2024-10-13 04:11:59.763848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.638 [2024-10-13 04:11:59.763857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:06.638 [2024-10-13 04:11:59.763869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:21:06.638 [2024-10-13 04:11:59.763876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.638 [2024-10-13 04:11:59.763957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.638 [2024-10-13 04:11:59.763965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:06.638 [2024-10-13 04:11:59.763973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:06.638 [2024-10-13 04:11:59.763979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.638 [2024-10-13 04:11:59.764090] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:06.638 [2024-10-13 04:11:59.764100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:06.638 [2024-10-13 04:11:59.764111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:06.638 [2024-10-13 04:11:59.764136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.638 [2024-10-13 04:11:59.764144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:06.638 [2024-10-13 04:11:59.764151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:06.638 [2024-10-13 04:11:59.764158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:06.638 [2024-10-13 04:11:59.764165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:06.638 [2024-10-13 04:11:59.764172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:06.639 [2024-10-13 04:11:59.764179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:06.639 [2024-10-13 04:11:59.764186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:06.639 [2024-10-13 04:11:59.764193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:06.639 [2024-10-13 04:11:59.764199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:06.639 [2024-10-13 04:11:59.764206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:06.639 [2024-10-13 04:11:59.764213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:06.639 [2024-10-13 04:11:59.764224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:21:06.639 [2024-10-13 04:11:59.764231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:06.639 [2024-10-13 04:11:59.764239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:06.639 [2024-10-13 04:11:59.764245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.639 [2024-10-13 04:11:59.764252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:06.639 [2024-10-13 04:11:59.764258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:06.639 [2024-10-13 04:11:59.764265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.639 [2024-10-13 04:11:59.764271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:06.639 [2024-10-13 04:11:59.764278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:06.639 [2024-10-13 04:11:59.764284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.639 [2024-10-13 04:11:59.764291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:06.639 [2024-10-13 04:11:59.764297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:06.639 [2024-10-13 04:11:59.764303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.639 [2024-10-13 04:11:59.764309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:06.639 [2024-10-13 04:11:59.764316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:06.639 [2024-10-13 04:11:59.764322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.639 [2024-10-13 04:11:59.764328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:06.639 [2024-10-13 04:11:59.764335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:06.639 [2024-10-13 04:11:59.764341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:06.639 [2024-10-13 04:11:59.764348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:06.639 [2024-10-13 04:11:59.764354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:06.639 [2024-10-13 04:11:59.764360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:06.639 [2024-10-13 04:11:59.764366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:06.639 [2024-10-13 04:11:59.764373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:06.639 [2024-10-13 04:11:59.764379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.639 [2024-10-13 04:11:59.764386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:06.639 [2024-10-13 04:11:59.764392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:06.639 [2024-10-13 04:11:59.764398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.639 [2024-10-13 04:11:59.764405] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:06.639 [2024-10-13 04:11:59.764413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:06.639 [2024-10-13 04:11:59.764420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:06.639 [2024-10-13 04:11:59.764426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.639 [2024-10-13 04:11:59.764434] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:06.639 [2024-10-13 04:11:59.764441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:06.639 [2024-10-13 04:11:59.764448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:06.639 [2024-10-13 04:11:59.764454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:06.639 [2024-10-13 04:11:59.764460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:06.639 [2024-10-13 04:11:59.764467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:06.639 [2024-10-13 04:11:59.764475] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:06.639 [2024-10-13 04:11:59.764483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:06.639 [2024-10-13 04:11:59.764491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:06.639 [2024-10-13 04:11:59.764499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:06.639 [2024-10-13 04:11:59.764506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:06.639 [2024-10-13 04:11:59.764513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:06.639 [2024-10-13 04:11:59.764519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:06.639 [2024-10-13 04:11:59.764526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:06.639 [2024-10-13 04:11:59.764533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:06.639 [2024-10-13 04:11:59.764540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:06.639 [2024-10-13 04:11:59.764547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:06.639 [2024-10-13 04:11:59.764554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:06.639 [2024-10-13 04:11:59.764561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:06.639 [2024-10-13 04:11:59.764568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:06.639 [2024-10-13 04:11:59.764574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:06.639 [2024-10-13 04:11:59.764581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:06.639 [2024-10-13 04:11:59.764589] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:06.639 
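
For reference, the "SB metadata layout" entries above list each region as blk_offs/blk_sz in FTL blocks (hex), while the dump_region lines report the same regions in MiB. The two views agree if a 4 KiB FTL block size is assumed; a minimal sketch of that conversion follows (the block size and the helper names are assumptions for illustration, the region values are copied from the dump above):

    # Sketch: convert superblock layout entries (FTL blocks, hex) to the MiB
    # figures printed by dump_region. Assumes a 4 KiB FTL block size.
    FTL_BLOCK_SIZE = 4096  # bytes, assumed

    def blocks_to_mib(blocks: int) -> float:
        return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

    # (name, blk_offs, blk_sz) copied from the "SB metadata layout - nvc" lines;
    # names matched to the dump_region output above by offset/size.
    nvc_regions = [
        ("sb",      0x0,    0x20),    # -> offset  0.00 MiB, size  0.12 MiB
        ("l2p",     0x20,   0x5000),  # -> offset  0.12 MiB, size 80.00 MiB
        ("band_md", 0x5020, 0x80),    # -> offset 80.12 MiB, size  0.50 MiB
        ("p2l0",    0x5120, 0x800),   # -> offset 81.12 MiB, size  8.00 MiB
    ]
    for name, offs, size in nvc_regions:
        print(f"{name}: offset {blocks_to_mib(offs):.2f} MiB, "
              f"size {blocks_to_mib(size):.2f} MiB")

    # Cross-check: 20971520 L2P entries * 4 B address size = 80 MiB,
    # which is exactly the l2p region size reported above.
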
[2024-10-13 04:11:59.764596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:06.639 [2024-10-13 04:11:59.764607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:06.639 [2024-10-13 04:11:59.764626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:06.639 [2024-10-13 04:11:59.764633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:06.639 [2024-10-13 04:11:59.764641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:06.639 [2024-10-13 04:11:59.764649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.639 [2024-10-13 04:11:59.764656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:06.639 [2024-10-13 04:11:59.764664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:21:06.639 [2024-10-13 04:11:59.764671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.639 [2024-10-13 04:11:59.790100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.639 [2024-10-13 04:11:59.790225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:06.639 [2024-10-13 04:11:59.790240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.388 ms 00:21:06.639 [2024-10-13 04:11:59.790248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.639 [2024-10-13 04:11:59.790334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.639 [2024-10-13 04:11:59.790346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:06.639 [2024-10-13 04:11:59.790355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:06.639 [2024-10-13 04:11:59.790362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.898 [2024-10-13 04:11:59.831298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.898 [2024-10-13 04:11:59.831445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:06.898 [2024-10-13 04:11:59.831464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.889 ms 00:21:06.898 [2024-10-13 04:11:59.831474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.898 [2024-10-13 04:11:59.831518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.898 [2024-10-13 04:11:59.831528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:06.898 [2024-10-13 04:11:59.831537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:06.898 [2024-10-13 04:11:59.831544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.898 [2024-10-13 04:11:59.831911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.898 [2024-10-13 04:11:59.831928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:06.898 [2024-10-13 04:11:59.831937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:21:06.898 [2024-10-13 04:11:59.831944] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:06.898 [2024-10-13 04:11:59.832065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.898 [2024-10-13 04:11:59.832074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:06.898 [2024-10-13 04:11:59.832082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:06.898 [2024-10-13 04:11:59.832090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.898 [2024-10-13 04:11:59.844890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.898 [2024-10-13 04:11:59.844917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:06.898 [2024-10-13 04:11:59.844928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.781 ms 00:21:06.898 [2024-10-13 04:11:59.844935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.898 [2024-10-13 04:11:59.856904] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:06.898 [2024-10-13 04:11:59.856934] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:06.898 [2024-10-13 04:11:59.856945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.898 [2024-10-13 04:11:59.856953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:06.898 [2024-10-13 04:11:59.856962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.898 ms 00:21:06.898 [2024-10-13 04:11:59.856969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.898 [2024-10-13 04:11:59.880837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.898 [2024-10-13 04:11:59.880963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:06.898 [2024-10-13 04:11:59.880986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.831 ms 00:21:06.898 [2024-10-13 04:11:59.880994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.898 [2024-10-13 04:11:59.892407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.898 [2024-10-13 04:11:59.892446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:06.898 [2024-10-13 04:11:59.892456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.378 ms 00:21:06.898 [2024-10-13 04:11:59.892463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.898 [2024-10-13 04:11:59.903549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.898 [2024-10-13 04:11:59.903578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:06.899 [2024-10-13 04:11:59.903589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.053 ms 00:21:06.899 [2024-10-13 04:11:59.903596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-10-13 04:11:59.904219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.899 [2024-10-13 04:11:59.904243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:06.899 [2024-10-13 04:11:59.904253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:21:06.899 [2024-10-13 04:11:59.904261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-10-13 
04:11:59.957890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.899 [2024-10-13 04:11:59.957939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:06.899 [2024-10-13 04:11:59.957954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.609 ms 00:21:06.899 [2024-10-13 04:11:59.957966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-10-13 04:11:59.968334] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:06.899 [2024-10-13 04:11:59.970787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.899 [2024-10-13 04:11:59.970817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:06.899 [2024-10-13 04:11:59.970829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.776 ms 00:21:06.899 [2024-10-13 04:11:59.970838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-10-13 04:11:59.970933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.899 [2024-10-13 04:11:59.970953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:06.899 [2024-10-13 04:11:59.970962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:06.899 [2024-10-13 04:11:59.970969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-10-13 04:11:59.972384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.899 [2024-10-13 04:11:59.972418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:06.899 [2024-10-13 04:11:59.972427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:21:06.899 [2024-10-13 04:11:59.972435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-10-13 04:11:59.972460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.899 [2024-10-13 04:11:59.972469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:06.899 [2024-10-13 04:11:59.972477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:06.899 [2024-10-13 04:11:59.972484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-10-13 04:11:59.972518] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:06.899 [2024-10-13 04:11:59.972528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.899 [2024-10-13 04:11:59.972538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:06.899 [2024-10-13 04:11:59.972546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:06.899 [2024-10-13 04:11:59.972553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-10-13 04:11:59.995197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.899 [2024-10-13 04:11:59.995233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:06.899 [2024-10-13 04:11:59.995244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.626 ms 00:21:06.899 [2024-10-13 04:11:59.995252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-10-13 04:11:59.995320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.899 [2024-10-13 04:11:59.995330] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:06.899 [2024-10-13 04:11:59.995338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:06.899 [2024-10-13 04:11:59.995345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.899 [2024-10-13 04:11:59.998490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 256.702 ms, result 0 00:21:08.272  [2024-10-13T04:12:02.365Z] Copying: 964/1048576 [kB] (964 kBps) [2024-10-13T04:12:03.314Z] Copying: 10324/1048576 [kB] (9360 kBps) [2024-10-13T04:12:04.274Z] Copying: 63/1024 [MB] (53 MBps) [2024-10-13T04:12:05.209Z] Copying: 121/1024 [MB] (58 MBps) [2024-10-13T04:12:06.584Z] Copying: 173/1024 [MB] (51 MBps) [2024-10-13T04:12:07.518Z] Copying: 226/1024 [MB] (53 MBps) [2024-10-13T04:12:08.453Z] Copying: 281/1024 [MB] (54 MBps) [2024-10-13T04:12:09.388Z] Copying: 334/1024 [MB] (53 MBps) [2024-10-13T04:12:10.323Z] Copying: 387/1024 [MB] (53 MBps) [2024-10-13T04:12:11.257Z] Copying: 443/1024 [MB] (55 MBps) [2024-10-13T04:12:12.193Z] Copying: 497/1024 [MB] (54 MBps) [2024-10-13T04:12:13.568Z] Copying: 550/1024 [MB] (52 MBps) [2024-10-13T04:12:14.503Z] Copying: 602/1024 [MB] (51 MBps) [2024-10-13T04:12:15.448Z] Copying: 656/1024 [MB] (54 MBps) [2024-10-13T04:12:16.396Z] Copying: 710/1024 [MB] (54 MBps) [2024-10-13T04:12:17.330Z] Copying: 763/1024 [MB] (52 MBps) [2024-10-13T04:12:18.263Z] Copying: 817/1024 [MB] (54 MBps) [2024-10-13T04:12:19.197Z] Copying: 872/1024 [MB] (54 MBps) [2024-10-13T04:12:20.574Z] Copying: 906/1024 [MB] (34 MBps) [2024-10-13T04:12:21.510Z] Copying: 933/1024 [MB] (26 MBps) [2024-10-13T04:12:22.445Z] Copying: 969/1024 [MB] (35 MBps) [2024-10-13T04:12:22.445Z] Copying: 1019/1024 [MB] (49 MBps) [2024-10-13T04:12:23.381Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-10-13 04:12:23.128163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.221 [2024-10-13 04:12:23.128241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:30.221 [2024-10-13 04:12:23.128259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:30.221 [2024-10-13 04:12:23.128273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.221 [2024-10-13 04:12:23.128298] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:30.221 [2024-10-13 04:12:23.131754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.221 [2024-10-13 04:12:23.131799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:30.221 [2024-10-13 04:12:23.131810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.436 ms 00:21:30.221 [2024-10-13 04:12:23.131819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.221 [2024-10-13 04:12:23.132073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.221 [2024-10-13 04:12:23.132093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:30.221 [2024-10-13 04:12:23.132207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:21:30.221 [2024-10-13 04:12:23.132222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.221 [2024-10-13 04:12:23.141764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.221 [2024-10-13 04:12:23.141815] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:30.221 [2024-10-13 04:12:23.141830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.522 ms 00:21:30.221 [2024-10-13 04:12:23.141841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.221 [2024-10-13 04:12:23.150061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.221 [2024-10-13 04:12:23.150109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:30.221 [2024-10-13 04:12:23.150121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.186 ms 00:21:30.221 [2024-10-13 04:12:23.150130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.221 [2024-10-13 04:12:23.181363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.221 [2024-10-13 04:12:23.181423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:30.221 [2024-10-13 04:12:23.181439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.176 ms 00:21:30.221 [2024-10-13 04:12:23.181449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.221 [2024-10-13 04:12:23.200206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.221 [2024-10-13 04:12:23.200266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:30.221 [2024-10-13 04:12:23.200284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.706 ms 00:21:30.221 [2024-10-13 04:12:23.200294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.221 [2024-10-13 04:12:23.201700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.221 [2024-10-13 04:12:23.201740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:30.221 [2024-10-13 04:12:23.201753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.372 ms 00:21:30.221 [2024-10-13 04:12:23.201764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.221 [2024-10-13 04:12:23.230511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.221 [2024-10-13 04:12:23.230567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:30.221 [2024-10-13 04:12:23.230582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.726 ms 00:21:30.221 [2024-10-13 04:12:23.230591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.221 [2024-10-13 04:12:23.258253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.221 [2024-10-13 04:12:23.258309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:30.221 [2024-10-13 04:12:23.258336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.605 ms 00:21:30.221 [2024-10-13 04:12:23.258347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.221 [2024-10-13 04:12:23.277932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.221 [2024-10-13 04:12:23.277969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:30.221 [2024-10-13 04:12:23.277980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.537 ms 00:21:30.221 [2024-10-13 04:12:23.277986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.221 [2024-10-13 04:12:23.294841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:21:30.221 [2024-10-13 04:12:23.294873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:30.221 [2024-10-13 04:12:23.294881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.804 ms 00:21:30.221 [2024-10-13 04:12:23.294887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.221 [2024-10-13 04:12:23.294913] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:30.221 [2024-10-13 04:12:23.294925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:30.221 [2024-10-13 04:12:23.294934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:21:30.221 [2024-10-13 04:12:23.294941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:30.221 [2024-10-13 04:12:23.294947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:30.221 [2024-10-13 04:12:23.294953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:30.221 [2024-10-13 04:12:23.294959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:30.221 [2024-10-13 04:12:23.294965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.294971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.294977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.294983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.294989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.294995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295054] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295197] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 
04:12:23.295341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:21:30.222 [2024-10-13 04:12:23.295507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:30.222 [2024-10-13 04:12:23.295537] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:30.222 [2024-10-13 04:12:23.295542] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed1e26c3-d8e3-46aa-a144-99192ed32845 00:21:30.223 [2024-10-13 04:12:23.295548] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:21:30.223 [2024-10-13 04:12:23.295554] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136896 00:21:30.223 [2024-10-13 04:12:23.295560] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134912 00:21:30.223 [2024-10-13 04:12:23.295566] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:21:30.223 [2024-10-13 04:12:23.295572] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:30.223 [2024-10-13 04:12:23.295581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:30.223 [2024-10-13 04:12:23.295587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:30.223 [2024-10-13 04:12:23.295597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:30.223 [2024-10-13 04:12:23.295603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:30.223 [2024-10-13 04:12:23.295609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.223 [2024-10-13 04:12:23.295624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:30.223 [2024-10-13 04:12:23.295631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:21:30.223 [2024-10-13 04:12:23.295636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.223 [2024-10-13 04:12:23.305064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.223 [2024-10-13 04:12:23.305093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:30.223 [2024-10-13 04:12:23.305105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.414 ms 00:21:30.223 [2024-10-13 04:12:23.305111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.223 [2024-10-13 04:12:23.305378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.223 [2024-10-13 04:12:23.305391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:30.223 [2024-10-13 04:12:23.305397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:21:30.223 [2024-10-13 04:12:23.305403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.223 [2024-10-13 04:12:23.331092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.223 [2024-10-13 04:12:23.331130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:30.223 [2024-10-13 04:12:23.331138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
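
Two of the figures in the ftl_debug dump above can be cross-checked directly from the log itself: the valid-LBA total is the sum of the per-band valid counts, and the reported WAF equals total writes divided by user writes (136896 / 134912 ≈ 1.0147). A minimal sketch with the values copied from the dump:

    # Sanity checks for the statistics dumped above (values copied from the log).
    band_valid = {1: 261120, 2: 1536}           # only bands 1 and 2 are non-empty
    print(sum(band_valid.values()))             # -> 262656, "total valid LBAs"

    total_writes, user_writes = 136896, 134912
    print(f"{total_writes / user_writes:.4f}")  # -> 1.0147, the reported WAF
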
00:21:30.223 [2024-10-13 04:12:23.331145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.223 [2024-10-13 04:12:23.331192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.223 [2024-10-13 04:12:23.331198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:30.223 [2024-10-13 04:12:23.331204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.223 [2024-10-13 04:12:23.331210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.223 [2024-10-13 04:12:23.331267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.223 [2024-10-13 04:12:23.331275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:30.223 [2024-10-13 04:12:23.331284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.223 [2024-10-13 04:12:23.331290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.223 [2024-10-13 04:12:23.331301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.223 [2024-10-13 04:12:23.331307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:30.223 [2024-10-13 04:12:23.331313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.223 [2024-10-13 04:12:23.331319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.481 [2024-10-13 04:12:23.390923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.482 [2024-10-13 04:12:23.390970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:30.482 [2024-10-13 04:12:23.390981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.482 [2024-10-13 04:12:23.390987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.482 [2024-10-13 04:12:23.439702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.482 [2024-10-13 04:12:23.439747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:30.482 [2024-10-13 04:12:23.439758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.482 [2024-10-13 04:12:23.439764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.482 [2024-10-13 04:12:23.439808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.482 [2024-10-13 04:12:23.439815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:30.482 [2024-10-13 04:12:23.439822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.482 [2024-10-13 04:12:23.439831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.482 [2024-10-13 04:12:23.439870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.482 [2024-10-13 04:12:23.439877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:30.482 [2024-10-13 04:12:23.439883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.482 [2024-10-13 04:12:23.439889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.482 [2024-10-13 04:12:23.439955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.482 [2024-10-13 04:12:23.439962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:30.482 [2024-10-13 
04:12:23.439969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.482 [2024-10-13 04:12:23.439974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.482 [2024-10-13 04:12:23.439998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.482 [2024-10-13 04:12:23.440005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:30.482 [2024-10-13 04:12:23.440011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.482 [2024-10-13 04:12:23.440017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.482 [2024-10-13 04:12:23.440045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.482 [2024-10-13 04:12:23.440051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:30.482 [2024-10-13 04:12:23.440057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.482 [2024-10-13 04:12:23.440063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.482 [2024-10-13 04:12:23.440098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.482 [2024-10-13 04:12:23.440106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:30.482 [2024-10-13 04:12:23.440119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.482 [2024-10-13 04:12:23.440136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.482 [2024-10-13 04:12:23.440225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 312.050 ms, result 0 00:21:31.417 00:21:31.417 00:21:31.674 04:12:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:33.578 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:33.578 04:12:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:33.578 [2024-10-13 04:12:26.577820] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:21:33.578 [2024-10-13 04:12:26.577916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77123 ] 00:21:33.578 [2024-10-13 04:12:26.722113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.837 [2024-10-13 04:12:26.818038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.096 [2024-10-13 04:12:27.068874] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:34.096 [2024-10-13 04:12:27.068938] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:34.096 [2024-10-13 04:12:27.223278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.096 [2024-10-13 04:12:27.223324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:34.096 [2024-10-13 04:12:27.223337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:34.096 [2024-10-13 04:12:27.223349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.096 [2024-10-13 04:12:27.223394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.096 [2024-10-13 04:12:27.223404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:34.096 [2024-10-13 04:12:27.223416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:34.096 [2024-10-13 04:12:27.223425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.096 [2024-10-13 04:12:27.223441] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:34.096 [2024-10-13 04:12:27.224096] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:34.096 [2024-10-13 04:12:27.224126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.096 [2024-10-13 04:12:27.224136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:34.096 [2024-10-13 04:12:27.224144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:21:34.096 [2024-10-13 04:12:27.224151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.096 [2024-10-13 04:12:27.225209] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:34.096 [2024-10-13 04:12:27.237483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.097 [2024-10-13 04:12:27.237520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:34.097 [2024-10-13 04:12:27.237531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.275 ms 00:21:34.097 [2024-10-13 04:12:27.237538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.097 [2024-10-13 04:12:27.237587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.097 [2024-10-13 04:12:27.237597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:34.097 [2024-10-13 04:12:27.237607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:34.097 [2024-10-13 04:12:27.237626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.097 [2024-10-13 04:12:27.242622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:34.097 [2024-10-13 04:12:27.242654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:34.097 [2024-10-13 04:12:27.242663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.928 ms 00:21:34.097 [2024-10-13 04:12:27.242671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.097 [2024-10-13 04:12:27.242739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.097 [2024-10-13 04:12:27.242747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:34.097 [2024-10-13 04:12:27.242755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:34.097 [2024-10-13 04:12:27.242762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.097 [2024-10-13 04:12:27.242800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.097 [2024-10-13 04:12:27.242809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:34.097 [2024-10-13 04:12:27.242817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:34.097 [2024-10-13 04:12:27.242824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.097 [2024-10-13 04:12:27.242844] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:34.097 [2024-10-13 04:12:27.246072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.097 [2024-10-13 04:12:27.246102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:34.097 [2024-10-13 04:12:27.246111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:21:34.097 [2024-10-13 04:12:27.246118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.097 [2024-10-13 04:12:27.246148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.097 [2024-10-13 04:12:27.246156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:34.097 [2024-10-13 04:12:27.246163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:34.097 [2024-10-13 04:12:27.246171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.097 [2024-10-13 04:12:27.246189] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:34.097 [2024-10-13 04:12:27.246206] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:34.097 [2024-10-13 04:12:27.246239] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:34.097 [2024-10-13 04:12:27.246256] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:34.097 [2024-10-13 04:12:27.246358] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:34.097 [2024-10-13 04:12:27.246375] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:34.097 [2024-10-13 04:12:27.246385] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:34.097 [2024-10-13 04:12:27.246395] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:34.097 [2024-10-13 04:12:27.246404] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:34.097 [2024-10-13 04:12:27.246412] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:34.097 [2024-10-13 04:12:27.246419] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:34.097 [2024-10-13 04:12:27.246427] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:34.097 [2024-10-13 04:12:27.246434] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:34.097 [2024-10-13 04:12:27.246441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.097 [2024-10-13 04:12:27.246450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:34.097 [2024-10-13 04:12:27.246458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:21:34.097 [2024-10-13 04:12:27.246465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.097 [2024-10-13 04:12:27.246547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.097 [2024-10-13 04:12:27.246556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:34.097 [2024-10-13 04:12:27.246563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:34.097 [2024-10-13 04:12:27.246571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.097 [2024-10-13 04:12:27.246682] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:34.097 [2024-10-13 04:12:27.246693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:34.097 [2024-10-13 04:12:27.246704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.097 [2024-10-13 04:12:27.246711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.097 [2024-10-13 04:12:27.246719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:34.097 [2024-10-13 04:12:27.246726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:34.097 [2024-10-13 04:12:27.246733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:34.097 [2024-10-13 04:12:27.246740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:34.097 [2024-10-13 04:12:27.246746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:34.097 [2024-10-13 04:12:27.246753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.097 [2024-10-13 04:12:27.246759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:34.097 [2024-10-13 04:12:27.246766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:34.097 [2024-10-13 04:12:27.246772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.097 [2024-10-13 04:12:27.246779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:34.097 [2024-10-13 04:12:27.246785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:34.097 [2024-10-13 04:12:27.246796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.097 [2024-10-13 04:12:27.246804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:34.097 [2024-10-13 04:12:27.246812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:34.097 [2024-10-13 04:12:27.246818] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.097 [2024-10-13 04:12:27.246825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:34.097 [2024-10-13 04:12:27.246831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:34.097 [2024-10-13 04:12:27.246837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.097 [2024-10-13 04:12:27.246844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:34.097 [2024-10-13 04:12:27.246851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:34.097 [2024-10-13 04:12:27.246857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.097 [2024-10-13 04:12:27.246864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:34.097 [2024-10-13 04:12:27.246870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:34.097 [2024-10-13 04:12:27.246876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.097 [2024-10-13 04:12:27.246883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:34.097 [2024-10-13 04:12:27.246889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:34.097 [2024-10-13 04:12:27.246895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.097 [2024-10-13 04:12:27.246902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:34.097 [2024-10-13 04:12:27.246908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:34.097 [2024-10-13 04:12:27.246915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.097 [2024-10-13 04:12:27.246921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:34.097 [2024-10-13 04:12:27.246927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:34.097 [2024-10-13 04:12:27.246933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.097 [2024-10-13 04:12:27.246940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:34.097 [2024-10-13 04:12:27.246946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:34.097 [2024-10-13 04:12:27.246953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.097 [2024-10-13 04:12:27.246959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:34.097 [2024-10-13 04:12:27.246965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:34.097 [2024-10-13 04:12:27.246971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.097 [2024-10-13 04:12:27.246977] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:34.097 [2024-10-13 04:12:27.246984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:34.097 [2024-10-13 04:12:27.246990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.097 [2024-10-13 04:12:27.246997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.097 [2024-10-13 04:12:27.247005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:34.097 [2024-10-13 04:12:27.247011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:34.097 [2024-10-13 04:12:27.247019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:34.097 
[2024-10-13 04:12:27.247026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:34.097 [2024-10-13 04:12:27.247032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:34.097 [2024-10-13 04:12:27.247038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:34.097 [2024-10-13 04:12:27.247046] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:34.097 [2024-10-13 04:12:27.247056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.097 [2024-10-13 04:12:27.247064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:34.097 [2024-10-13 04:12:27.247071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:34.098 [2024-10-13 04:12:27.247078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:34.098 [2024-10-13 04:12:27.247084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:34.098 [2024-10-13 04:12:27.247092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:34.098 [2024-10-13 04:12:27.247099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:34.098 [2024-10-13 04:12:27.247105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:34.098 [2024-10-13 04:12:27.247112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:34.098 [2024-10-13 04:12:27.247119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:34.098 [2024-10-13 04:12:27.247126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:34.098 [2024-10-13 04:12:27.247133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:34.098 [2024-10-13 04:12:27.247140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:34.098 [2024-10-13 04:12:27.247147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:34.098 [2024-10-13 04:12:27.247154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:34.098 [2024-10-13 04:12:27.247161] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:34.098 [2024-10-13 04:12:27.247169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.098 [2024-10-13 04:12:27.247179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:34.098 [2024-10-13 04:12:27.247186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:34.098 [2024-10-13 04:12:27.247193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:34.098 [2024-10-13 04:12:27.247200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:34.098 [2024-10-13 04:12:27.247207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.098 [2024-10-13 04:12:27.247215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:34.098 [2024-10-13 04:12:27.247222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:21:34.098 [2024-10-13 04:12:27.247229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.272851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.272886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:34.357 [2024-10-13 04:12:27.272896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.570 ms 00:21:34.357 [2024-10-13 04:12:27.272903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.272981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.272991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:34.357 [2024-10-13 04:12:27.272999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:34.357 [2024-10-13 04:12:27.273006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.318193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.318234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:34.357 [2024-10-13 04:12:27.318246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.140 ms 00:21:34.357 [2024-10-13 04:12:27.318255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.318292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.318302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:34.357 [2024-10-13 04:12:27.318310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:34.357 [2024-10-13 04:12:27.318317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.318693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.318717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.357 [2024-10-13 04:12:27.318726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:21:34.357 [2024-10-13 04:12:27.318733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.318859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.318868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:34.357 [2024-10-13 04:12:27.318876] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:21:34.357 [2024-10-13 04:12:27.318883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.331808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.331840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:34.357 [2024-10-13 04:12:27.331851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.901 ms 00:21:34.357 [2024-10-13 04:12:27.331861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.344032] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:34.357 [2024-10-13 04:12:27.344067] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:34.357 [2024-10-13 04:12:27.344078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.344085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:34.357 [2024-10-13 04:12:27.344093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.129 ms 00:21:34.357 [2024-10-13 04:12:27.344100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.367862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.367898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:34.357 [2024-10-13 04:12:27.367913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.716 ms 00:21:34.357 [2024-10-13 04:12:27.367920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.379455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.379488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:34.357 [2024-10-13 04:12:27.379496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.495 ms 00:21:34.357 [2024-10-13 04:12:27.379504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.390556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.390588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:34.357 [2024-10-13 04:12:27.390597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.022 ms 00:21:34.357 [2024-10-13 04:12:27.390604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.391190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.391216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:34.357 [2024-10-13 04:12:27.391225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:21:34.357 [2024-10-13 04:12:27.391232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.445444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.445495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:34.357 [2024-10-13 04:12:27.445507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.193 ms 00:21:34.357 [2024-10-13 04:12:27.445520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.455887] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:34.357 [2024-10-13 04:12:27.457983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.458011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:34.357 [2024-10-13 04:12:27.458023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.420 ms 00:21:34.357 [2024-10-13 04:12:27.458030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.458114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.458124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:34.357 [2024-10-13 04:12:27.458133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:34.357 [2024-10-13 04:12:27.458140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.458720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.458746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:34.357 [2024-10-13 04:12:27.458755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:21:34.357 [2024-10-13 04:12:27.458763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.458785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.458793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:34.357 [2024-10-13 04:12:27.458801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:34.357 [2024-10-13 04:12:27.458809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.458840] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:34.357 [2024-10-13 04:12:27.458850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.458860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:34.357 [2024-10-13 04:12:27.458867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:34.357 [2024-10-13 04:12:27.458874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.481909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.481943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:34.357 [2024-10-13 04:12:27.481954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.018 ms 00:21:34.357 [2024-10-13 04:12:27.481962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.357 [2024-10-13 04:12:27.482032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.357 [2024-10-13 04:12:27.482042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:34.357 [2024-10-13 04:12:27.482051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:34.357 [2024-10-13 04:12:27.482058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
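Each management step above is reported by trace_step() in mngt/ftl_mngt.c as a four-notice group: Action, name, duration and status; finish_msg then reports the overall management-process total ('FTL startup', 259.479 ms, just below). A rough sketch for pulling those per-step timings out of a capture like this one, assuming the console wrapping is undone so each notice sits on its own line (the awk pattern and the file name are illustrations, not part of the test suite):

    awk -F'name: ' '
        /trace_step/ && /name: /     { step = $2 }
        /trace_step/ && /duration: / { match($0, /duration: [0-9.]+ ms/)
                                       print step ": " substr($0, RSTART + 10, RLENGTH - 10) }
    ' ftl0_startup.log

Note that the per-step durations do not necessarily sum to the reported total: steps logged before this excerpt and time spent between steps also count toward it.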
00:21:34.357 [2024-10-13 04:12:27.483167] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 259.479 ms, result 0 00:21:35.778  [2024-10-13T04:12:29.874Z] Copying: 45/1024 [MB] (45 MBps) [2024-10-13T04:12:30.809Z] Copying: 95/1024 [MB] (49 MBps) [2024-10-13T04:12:31.745Z] Copying: 142/1024 [MB] (47 MBps) [2024-10-13T04:12:32.680Z] Copying: 188/1024 [MB] (46 MBps) [2024-10-13T04:12:34.056Z] Copying: 237/1024 [MB] (49 MBps) [2024-10-13T04:12:34.992Z] Copying: 288/1024 [MB] (50 MBps) [2024-10-13T04:12:35.928Z] Copying: 337/1024 [MB] (49 MBps) [2024-10-13T04:12:36.864Z] Copying: 386/1024 [MB] (48 MBps) [2024-10-13T04:12:37.800Z] Copying: 436/1024 [MB] (49 MBps) [2024-10-13T04:12:38.735Z] Copying: 484/1024 [MB] (48 MBps) [2024-10-13T04:12:39.672Z] Copying: 534/1024 [MB] (49 MBps) [2024-10-13T04:12:41.047Z] Copying: 581/1024 [MB] (47 MBps) [2024-10-13T04:12:41.986Z] Copying: 630/1024 [MB] (48 MBps) [2024-10-13T04:12:42.922Z] Copying: 681/1024 [MB] (50 MBps) [2024-10-13T04:12:43.857Z] Copying: 730/1024 [MB] (49 MBps) [2024-10-13T04:12:44.791Z] Copying: 780/1024 [MB] (49 MBps) [2024-10-13T04:12:45.725Z] Copying: 830/1024 [MB] (50 MBps) [2024-10-13T04:12:46.657Z] Copying: 880/1024 [MB] (49 MBps) [2024-10-13T04:12:48.030Z] Copying: 930/1024 [MB] (50 MBps) [2024-10-13T04:12:48.596Z] Copying: 982/1024 [MB] (52 MBps) [2024-10-13T04:12:48.596Z] Copying: 1024/1024 [MB] (average 49 MBps)[2024-10-13 04:12:48.555858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.436 [2024-10-13 04:12:48.555908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:55.436 [2024-10-13 04:12:48.555923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:55.436 [2024-10-13 04:12:48.555931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.436 [2024-10-13 04:12:48.555951] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:55.436 [2024-10-13 04:12:48.558565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.436 [2024-10-13 04:12:48.558593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:55.436 [2024-10-13 04:12:48.558604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:21:55.436 [2024-10-13 04:12:48.558620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.436 [2024-10-13 04:12:48.558854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.436 [2024-10-13 04:12:48.558871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:55.436 [2024-10-13 04:12:48.558880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:21:55.436 [2024-10-13 04:12:48.558888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.436 [2024-10-13 04:12:48.562319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.436 [2024-10-13 04:12:48.562335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:55.436 [2024-10-13 04:12:48.562344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.419 ms 00:21:55.436 [2024-10-13 04:12:48.562352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.436 [2024-10-13 04:12:48.569781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.436 [2024-10-13 04:12:48.569810] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:55.436 [2024-10-13 04:12:48.569820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.413 ms 00:21:55.436 [2024-10-13 04:12:48.569828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.436 [2024-10-13 04:12:48.595451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.436 [2024-10-13 04:12:48.595485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:55.436 [2024-10-13 04:12:48.595497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.572 ms 00:21:55.436 [2024-10-13 04:12:48.595504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.696 [2024-10-13 04:12:48.608934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.696 [2024-10-13 04:12:48.608965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:55.696 [2024-10-13 04:12:48.608976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.409 ms 00:21:55.696 [2024-10-13 04:12:48.608984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.696 [2024-10-13 04:12:48.610762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.696 [2024-10-13 04:12:48.610788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:55.696 [2024-10-13 04:12:48.610802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:21:55.696 [2024-10-13 04:12:48.610810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.696 [2024-10-13 04:12:48.633755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.696 [2024-10-13 04:12:48.633783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:55.696 [2024-10-13 04:12:48.633793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.930 ms 00:21:55.696 [2024-10-13 04:12:48.633800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.696 [2024-10-13 04:12:48.656138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.696 [2024-10-13 04:12:48.656172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:55.696 [2024-10-13 04:12:48.656181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.320 ms 00:21:55.696 [2024-10-13 04:12:48.656188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.696 [2024-10-13 04:12:48.678717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.696 [2024-10-13 04:12:48.678742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:55.696 [2024-10-13 04:12:48.678752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.511 ms 00:21:55.696 [2024-10-13 04:12:48.678759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.696 [2024-10-13 04:12:48.700944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.696 [2024-10-13 04:12:48.700970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:55.696 [2024-10-13 04:12:48.700979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.145 ms 00:21:55.696 [2024-10-13 04:12:48.700986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.696 [2024-10-13 04:12:48.701002] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:21:55.696 [2024-10-13 04:12:48.701015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:55.696 [2024-10-13 04:12:48.701025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:21:55.696 [2024-10-13 04:12:48.701033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701398] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:55.696 [2024-10-13 04:12:48.701550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 
04:12:48.701594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:21:55.697 [2024-10-13 04:12:48.701798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:55.697 [2024-10-13 04:12:48.701813] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:55.697 [2024-10-13 04:12:48.701824] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed1e26c3-d8e3-46aa-a144-99192ed32845 00:21:55.697 [2024-10-13 04:12:48.701832] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:21:55.697 [2024-10-13 04:12:48.701842] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:55.697 [2024-10-13 04:12:48.701849] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:55.697 [2024-10-13 04:12:48.701857] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:55.697 [2024-10-13 04:12:48.701867] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:55.697 [2024-10-13 04:12:48.701875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:55.697 [2024-10-13 04:12:48.701887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:55.697 [2024-10-13 04:12:48.701894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:55.697 [2024-10-13 04:12:48.701901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:55.697 [2024-10-13 04:12:48.701908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.697 [2024-10-13 04:12:48.701919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:55.697 [2024-10-13 04:12:48.701927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:21:55.697 [2024-10-13 04:12:48.701934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.697 [2024-10-13 04:12:48.714426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.697 [2024-10-13 04:12:48.714451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:55.697 [2024-10-13 04:12:48.714461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.476 ms 00:21:55.697 [2024-10-13 04:12:48.714469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.697 [2024-10-13 04:12:48.714826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.697 [2024-10-13 04:12:48.714848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:55.697 [2024-10-13 04:12:48.714856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:21:55.697 [2024-10-13 04:12:48.714866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.697 [2024-10-13 04:12:48.747185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.697 [2024-10-13 04:12:48.747214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:55.697 [2024-10-13 04:12:48.747224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.697 [2024-10-13 04:12:48.747232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.697 [2024-10-13 04:12:48.747282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.697 [2024-10-13 04:12:48.747291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:55.697 [2024-10-13 04:12:48.747299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:21:55.697 [2024-10-13 04:12:48.747310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.697 [2024-10-13 04:12:48.747365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.697 [2024-10-13 04:12:48.747375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:55.697 [2024-10-13 04:12:48.747383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.697 [2024-10-13 04:12:48.747390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.697 [2024-10-13 04:12:48.747405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.697 [2024-10-13 04:12:48.747412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:55.697 [2024-10-13 04:12:48.747420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.697 [2024-10-13 04:12:48.747429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.697 [2024-10-13 04:12:48.824185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.697 [2024-10-13 04:12:48.824226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:55.697 [2024-10-13 04:12:48.824237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.697 [2024-10-13 04:12:48.824244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.955 [2024-10-13 04:12:48.887927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.955 [2024-10-13 04:12:48.887973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.955 [2024-10-13 04:12:48.887984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.955 [2024-10-13 04:12:48.887992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.956 [2024-10-13 04:12:48.888060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.956 [2024-10-13 04:12:48.888069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.956 [2024-10-13 04:12:48.888077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.956 [2024-10-13 04:12:48.888085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.956 [2024-10-13 04:12:48.888135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.956 [2024-10-13 04:12:48.888144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.956 [2024-10-13 04:12:48.888152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.956 [2024-10-13 04:12:48.888160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.956 [2024-10-13 04:12:48.888246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.956 [2024-10-13 04:12:48.888255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.956 [2024-10-13 04:12:48.888263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.956 [2024-10-13 04:12:48.888270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.956 [2024-10-13 04:12:48.888300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.956 [2024-10-13 04:12:48.888308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:55.956 [2024-10-13 
04:12:48.888316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.956 [2024-10-13 04:12:48.888324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.956 [2024-10-13 04:12:48.888357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.956 [2024-10-13 04:12:48.888368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.956 [2024-10-13 04:12:48.888376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.956 [2024-10-13 04:12:48.888383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.956 [2024-10-13 04:12:48.888421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.956 [2024-10-13 04:12:48.888430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.956 [2024-10-13 04:12:48.888438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.956 [2024-10-13 04:12:48.888447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.956 [2024-10-13 04:12:48.888552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.670 ms, result 0 00:21:56.521 00:21:56.521 00:21:56.521 04:12:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:21:59.051 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:21:59.051 04:12:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:21:59.051 04:12:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:21:59.051 04:12:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:59.051 04:12:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:59.051 04:12:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:21:59.051 04:12:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:59.051 04:12:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:21:59.051 04:12:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 75909 00:21:59.051 04:12:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 75909 ']' 00:21:59.051 04:12:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 75909 00:21:59.051 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (75909) - No such process 00:21:59.051 Process with pid 75909 is not found 00:21:59.051 04:12:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 75909 is not found' 00:21:59.051 04:12:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:21:59.051 04:12:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:21:59.051 Remove shared memory files 00:21:59.051 04:12:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:59.051 04:12:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:21:59.051 04:12:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:21:59.051 04:12:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:21:59.051 04:12:52 
ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:59.051 04:12:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:21:59.051 00:21:59.051 real 2m16.304s 00:21:59.051 user 2m32.006s 00:21:59.051 sys 0m22.070s 00:21:59.051 04:12:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:59.051 04:12:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:59.051 ************************************ 00:21:59.051 END TEST ftl_dirty_shutdown 00:21:59.051 ************************************ 00:21:59.051 04:12:52 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:21:59.051 04:12:52 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:59.051 04:12:52 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:59.051 04:12:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:59.051 ************************************ 00:21:59.051 START TEST ftl_upgrade_shutdown 00:21:59.051 ************************************ 00:21:59.051 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:21:59.310 * Looking for test storage... 00:21:59.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:59.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.310 --rc genhtml_branch_coverage=1 00:21:59.310 --rc genhtml_function_coverage=1 00:21:59.310 --rc genhtml_legend=1 00:21:59.310 --rc geninfo_all_blocks=1 00:21:59.310 --rc geninfo_unexecuted_blocks=1 00:21:59.310 00:21:59.310 ' 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:59.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.310 --rc genhtml_branch_coverage=1 00:21:59.310 --rc genhtml_function_coverage=1 00:21:59.310 --rc genhtml_legend=1 00:21:59.310 --rc geninfo_all_blocks=1 00:21:59.310 --rc geninfo_unexecuted_blocks=1 00:21:59.310 00:21:59.310 ' 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:59.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.310 --rc genhtml_branch_coverage=1 00:21:59.310 --rc genhtml_function_coverage=1 00:21:59.310 --rc genhtml_legend=1 00:21:59.310 --rc geninfo_all_blocks=1 00:21:59.310 --rc geninfo_unexecuted_blocks=1 00:21:59.310 00:21:59.310 ' 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:59.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.310 --rc genhtml_branch_coverage=1 00:21:59.310 --rc genhtml_function_coverage=1 00:21:59.310 --rc genhtml_legend=1 00:21:59.310 --rc geninfo_all_blocks=1 00:21:59.310 --rc geninfo_unexecuted_blocks=1 00:21:59.310 00:21:59.310 ' 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:59.310 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:21:59.311 04:12:52 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=77459 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 77459 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77459 ']' 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:59.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:59.311 04:12:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:21:59.311 [2024-10-13 04:12:52.420755] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
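The xtrace above shows tcp_target_setup exporting the FTL_* parameters and then launching a dedicated spdk_tgt (pid 77459) pinned to core 0, with waitforlisten blocking until the target answers on /var/tmp/spdk.sock before the script configures it over RPC. A minimal sketch of that launch-and-wait pattern follows; it is not the harness code itself, and polling rpc_get_methods is an assumed stand-in for the real waitforlisten check in autotest_common.sh:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
    spdk_tgt_pid=$!                                   # 77459 in the run above
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                                     # keep probing until the RPC socket is up
    done
    # once the target listens, expose the base NVMe device as bdev 'base', as the trace does below
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0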
00:21:59.311 [2024-10-13 04:12:52.420873] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77459 ] 00:21:59.569 [2024-10-13 04:12:52.565818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.569 [2024-10-13 04:12:52.665139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:00.171 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:22:00.429 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:22:00.430 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:00.430 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:22:00.430 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:22:00.430 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:00.430 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:00.430 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:22:00.430 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:22:00.688 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:00.688 { 00:22:00.688 "name": "basen1", 00:22:00.688 "aliases": [ 00:22:00.688 "ba5ee82e-1691-4ba2-8070-8d40cea4b8e0" 00:22:00.688 ], 00:22:00.688 "product_name": "NVMe disk", 00:22:00.688 "block_size": 4096, 00:22:00.688 "num_blocks": 1310720, 00:22:00.688 "uuid": "ba5ee82e-1691-4ba2-8070-8d40cea4b8e0", 00:22:00.688 "numa_id": -1, 00:22:00.688 "assigned_rate_limits": { 00:22:00.688 "rw_ios_per_sec": 0, 00:22:00.688 "rw_mbytes_per_sec": 0, 00:22:00.688 "r_mbytes_per_sec": 0, 00:22:00.688 "w_mbytes_per_sec": 0 00:22:00.688 }, 00:22:00.688 "claimed": true, 00:22:00.688 "claim_type": "read_many_write_one", 00:22:00.688 "zoned": false, 00:22:00.688 "supported_io_types": { 00:22:00.688 "read": true, 00:22:00.688 "write": true, 00:22:00.688 "unmap": true, 00:22:00.688 "flush": true, 00:22:00.688 "reset": true, 00:22:00.688 "nvme_admin": true, 00:22:00.688 "nvme_io": true, 00:22:00.688 "nvme_io_md": false, 00:22:00.688 "write_zeroes": true, 00:22:00.688 "zcopy": false, 00:22:00.688 "get_zone_info": false, 00:22:00.688 "zone_management": false, 00:22:00.688 "zone_append": false, 00:22:00.688 "compare": true, 00:22:00.688 "compare_and_write": false, 00:22:00.688 "abort": true, 00:22:00.688 "seek_hole": false, 00:22:00.688 "seek_data": false, 00:22:00.688 "copy": true, 00:22:00.688 "nvme_iov_md": false 00:22:00.688 }, 00:22:00.688 "driver_specific": { 00:22:00.688 "nvme": [ 00:22:00.688 { 00:22:00.688 "pci_address": "0000:00:11.0", 00:22:00.688 "trid": { 00:22:00.688 "trtype": "PCIe", 00:22:00.688 "traddr": "0000:00:11.0" 00:22:00.688 }, 00:22:00.688 "ctrlr_data": { 00:22:00.688 "cntlid": 0, 00:22:00.688 "vendor_id": "0x1b36", 00:22:00.688 "model_number": "QEMU NVMe Ctrl", 00:22:00.688 "serial_number": "12341", 00:22:00.688 "firmware_revision": "8.0.0", 00:22:00.689 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:00.689 "oacs": { 00:22:00.689 "security": 0, 00:22:00.689 "format": 1, 00:22:00.689 "firmware": 0, 00:22:00.689 "ns_manage": 1 00:22:00.689 }, 00:22:00.689 "multi_ctrlr": false, 00:22:00.689 "ana_reporting": false 00:22:00.689 }, 00:22:00.689 "vs": { 00:22:00.689 "nvme_version": "1.4" 00:22:00.689 }, 00:22:00.689 "ns_data": { 00:22:00.689 "id": 1, 00:22:00.689 "can_share": false 00:22:00.689 } 00:22:00.689 } 00:22:00.689 ], 00:22:00.689 "mp_policy": "active_passive" 00:22:00.689 } 00:22:00.689 } 00:22:00.689 ]' 00:22:00.689 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:00.689 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:00.689 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:00.689 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:00.689 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:00.689 04:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:00.689 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:00.689 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:22:00.689 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:00.689 04:12:53 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:00.689 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:00.947 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=ff699a21-bcf2-4700-bd4a-a69c499fd4fa 00:22:00.947 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:00.947 04:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff699a21-bcf2-4700-bd4a-a69c499fd4fa 00:22:01.205 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:22:01.463 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=839809c3-d701-40d8-a5c8-463058a9d96d 00:22:01.463 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 839809c3-d701-40d8-a5c8-463058a9d96d 00:22:01.463 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=6f5ed80a-7d68-4a83-b69d-c01ab9d1a91f 00:22:01.463 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 6f5ed80a-7d68-4a83-b69d-c01ab9d1a91f ]] 00:22:01.463 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 6f5ed80a-7d68-4a83-b69d-c01ab9d1a91f 5120 00:22:01.463 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:22:01.463 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:01.463 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=6f5ed80a-7d68-4a83-b69d-c01ab9d1a91f 00:22:01.463 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:22:01.721 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6f5ed80a-7d68-4a83-b69d-c01ab9d1a91f 00:22:01.721 04:12:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=6f5ed80a-7d68-4a83-b69d-c01ab9d1a91f 00:22:01.721 04:12:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:01.721 04:12:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:01.721 04:12:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:01.721 04:12:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f5ed80a-7d68-4a83-b69d-c01ab9d1a91f 00:22:01.721 04:12:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:01.721 { 00:22:01.721 "name": "6f5ed80a-7d68-4a83-b69d-c01ab9d1a91f", 00:22:01.721 "aliases": [ 00:22:01.721 "lvs/basen1p0" 00:22:01.721 ], 00:22:01.721 "product_name": "Logical Volume", 00:22:01.721 "block_size": 4096, 00:22:01.721 "num_blocks": 5242880, 00:22:01.721 "uuid": "6f5ed80a-7d68-4a83-b69d-c01ab9d1a91f", 00:22:01.721 "assigned_rate_limits": { 00:22:01.721 "rw_ios_per_sec": 0, 00:22:01.721 "rw_mbytes_per_sec": 0, 00:22:01.721 "r_mbytes_per_sec": 0, 00:22:01.721 "w_mbytes_per_sec": 0 00:22:01.721 }, 00:22:01.721 "claimed": false, 00:22:01.721 "zoned": false, 00:22:01.721 "supported_io_types": { 00:22:01.721 "read": true, 00:22:01.721 "write": true, 00:22:01.721 "unmap": true, 00:22:01.721 "flush": false, 00:22:01.721 "reset": true, 00:22:01.721 "nvme_admin": false, 00:22:01.721 "nvme_io": false, 00:22:01.721 "nvme_io_md": false, 00:22:01.721 "write_zeroes": 
true, 00:22:01.721 "zcopy": false, 00:22:01.721 "get_zone_info": false, 00:22:01.721 "zone_management": false, 00:22:01.721 "zone_append": false, 00:22:01.721 "compare": false, 00:22:01.721 "compare_and_write": false, 00:22:01.721 "abort": false, 00:22:01.721 "seek_hole": true, 00:22:01.721 "seek_data": true, 00:22:01.721 "copy": false, 00:22:01.721 "nvme_iov_md": false 00:22:01.721 }, 00:22:01.721 "driver_specific": { 00:22:01.721 "lvol": { 00:22:01.721 "lvol_store_uuid": "839809c3-d701-40d8-a5c8-463058a9d96d", 00:22:01.721 "base_bdev": "basen1", 00:22:01.721 "thin_provision": true, 00:22:01.721 "num_allocated_clusters": 0, 00:22:01.721 "snapshot": false, 00:22:01.721 "clone": false, 00:22:01.721 "esnap_clone": false 00:22:01.721 } 00:22:01.721 } 00:22:01.721 } 00:22:01.721 ]' 00:22:01.721 04:12:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:01.721 04:12:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:01.721 04:12:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:01.721 04:12:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:22:01.721 04:12:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:22:01.982 04:12:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:22:01.982 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:22:01.982 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:01.982 04:12:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:22:01.982 04:12:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:22:01.982 04:12:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:22:01.982 04:12:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:22:02.240 04:12:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:22:02.240 04:12:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:22:02.240 04:12:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 6f5ed80a-7d68-4a83-b69d-c01ab9d1a91f -c cachen1p0 --l2p_dram_limit 2 00:22:02.500 [2024-10-13 04:12:55.518793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:02.500 [2024-10-13 04:12:55.519165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:02.500 [2024-10-13 04:12:55.519225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:02.500 [2024-10-13 04:12:55.519260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:02.500 [2024-10-13 04:12:55.519340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:02.500 [2024-10-13 04:12:55.519377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:02.500 [2024-10-13 04:12:55.519411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:22:02.500 [2024-10-13 04:12:55.519442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:02.500 [2024-10-13 04:12:55.519481] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:02.500 [2024-10-13 
04:12:55.520118] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:02.500 [2024-10-13 04:12:55.520202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:02.500 [2024-10-13 04:12:55.520229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:02.500 [2024-10-13 04:12:55.520266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.723 ms 00:22:02.500 [2024-10-13 04:12:55.520298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:02.500 [2024-10-13 04:12:55.520429] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 32b67c36-f0ef-4eb8-a43b-d7ee19b6a02f 00:22:02.500 [2024-10-13 04:12:55.521498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:02.500 [2024-10-13 04:12:55.521587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:22:02.500 [2024-10-13 04:12:55.521641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:22:02.500 [2024-10-13 04:12:55.521682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:02.500 [2024-10-13 04:12:55.526598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:02.500 [2024-10-13 04:12:55.526687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:02.500 [2024-10-13 04:12:55.526723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.861 ms 00:22:02.500 [2024-10-13 04:12:55.526757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:02.500 [2024-10-13 04:12:55.526812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:02.500 [2024-10-13 04:12:55.526845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:02.500 [2024-10-13 04:12:55.526877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:22:02.500 [2024-10-13 04:12:55.526914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:02.500 [2024-10-13 04:12:55.526981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:02.500 [2024-10-13 04:12:55.527018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:02.500 [2024-10-13 04:12:55.527049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:02.500 [2024-10-13 04:12:55.527082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:02.500 [2024-10-13 04:12:55.527129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:02.500 [2024-10-13 04:12:55.530056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:02.500 [2024-10-13 04:12:55.530127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:02.500 [2024-10-13 04:12:55.530164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.930 ms 00:22:02.500 [2024-10-13 04:12:55.530198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:02.500 [2024-10-13 04:12:55.530242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:02.500 [2024-10-13 04:12:55.530273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:02.500 [2024-10-13 04:12:55.530312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:02.500 [2024-10-13 04:12:55.530346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:22:02.500 [2024-10-13 04:12:55.530382] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:22:02.500 [2024-10-13 04:12:55.530517] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:22:02.500 [2024-10-13 04:12:55.530563] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:02.500 [2024-10-13 04:12:55.530601] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:22:02.500 [2024-10-13 04:12:55.530650] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:02.500 [2024-10-13 04:12:55.530687] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:02.500 [2024-10-13 04:12:55.530722] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:02.500 [2024-10-13 04:12:55.530751] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:02.500 [2024-10-13 04:12:55.530782] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:22:02.500 [2024-10-13 04:12:55.530814] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:22:02.500 [2024-10-13 04:12:55.530847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:02.500 [2024-10-13 04:12:55.530875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:02.500 [2024-10-13 04:12:55.530907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.466 ms 00:22:02.500 [2024-10-13 04:12:55.530937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:02.500 [2024-10-13 04:12:55.531023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:02.500 [2024-10-13 04:12:55.531057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:02.500 [2024-10-13 04:12:55.531091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:22:02.500 [2024-10-13 04:12:55.531126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:02.500 [2024-10-13 04:12:55.531236] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:02.500 [2024-10-13 04:12:55.531279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:02.500 [2024-10-13 04:12:55.531315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:02.500 [2024-10-13 04:12:55.531346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:02.500 [2024-10-13 04:12:55.531381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:02.500 [2024-10-13 04:12:55.531416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:02.500 [2024-10-13 04:12:55.531454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:02.500 [2024-10-13 04:12:55.531488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:02.500 [2024-10-13 04:12:55.531520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:02.500 [2024-10-13 04:12:55.531553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:02.500 [2024-10-13 04:12:55.531581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:02.500 [2024-10-13 04:12:55.531634] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:22:02.500 [2024-10-13 04:12:55.531671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:02.500 [2024-10-13 04:12:55.531698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:02.500 [2024-10-13 04:12:55.531731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:22:02.500 [2024-10-13 04:12:55.531762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:02.500 [2024-10-13 04:12:55.531789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:02.500 [2024-10-13 04:12:55.531820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:22:02.500 [2024-10-13 04:12:55.531856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:02.500 [2024-10-13 04:12:55.531884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:02.500 [2024-10-13 04:12:55.531918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:02.500 [2024-10-13 04:12:55.531952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:02.500 [2024-10-13 04:12:55.531983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:02.500 [2024-10-13 04:12:55.532012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:02.500 [2024-10-13 04:12:55.532044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:02.500 [2024-10-13 04:12:55.532071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:02.500 [2024-10-13 04:12:55.532117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:02.500 [2024-10-13 04:12:55.532148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:02.500 [2024-10-13 04:12:55.532179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:02.500 [2024-10-13 04:12:55.532205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:22:02.500 [2024-10-13 04:12:55.532236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:02.500 [2024-10-13 04:12:55.532261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:02.500 [2024-10-13 04:12:55.532291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:22:02.500 [2024-10-13 04:12:55.532326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:02.500 [2024-10-13 04:12:55.532353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:02.500 [2024-10-13 04:12:55.532380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:22:02.500 [2024-10-13 04:12:55.532413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:02.500 [2024-10-13 04:12:55.532441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:22:02.500 [2024-10-13 04:12:55.532476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:22:02.500 [2024-10-13 04:12:55.532504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:02.500 [2024-10-13 04:12:55.532531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:22:02.500 [2024-10-13 04:12:55.532563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:22:02.500 [2024-10-13 04:12:55.532598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:02.500 [2024-10-13 04:12:55.532638] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:22:02.500 [2024-10-13 04:12:55.532667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:02.500 [2024-10-13 04:12:55.532704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:02.501 [2024-10-13 04:12:55.532736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:02.501 [2024-10-13 04:12:55.532765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:02.501 [2024-10-13 04:12:55.532800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:02.501 [2024-10-13 04:12:55.532835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:02.501 [2024-10-13 04:12:55.532872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:02.501 [2024-10-13 04:12:55.532903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:02.501 [2024-10-13 04:12:55.532937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:02.501 [2024-10-13 04:12:55.532971] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:02.501 [2024-10-13 04:12:55.533002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.501 [2024-10-13 04:12:55.533038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:02.501 [2024-10-13 04:12:55.533071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:22:02.501 [2024-10-13 04:12:55.533100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:22:02.501 [2024-10-13 04:12:55.533137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:22:02.501 [2024-10-13 04:12:55.533166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:22:02.501 [2024-10-13 04:12:55.533198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:22:02.501 [2024-10-13 04:12:55.533228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:22:02.501 [2024-10-13 04:12:55.533261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:22:02.501 [2024-10-13 04:12:55.533269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:22:02.501 [2024-10-13 04:12:55.533278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:22:02.501 [2024-10-13 04:12:55.533284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:22:02.501 [2024-10-13 04:12:55.533294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:22:02.501 [2024-10-13 04:12:55.533300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:22:02.501 [2024-10-13 04:12:55.533307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:22:02.501 [2024-10-13 04:12:55.533312] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:02.501 [2024-10-13 04:12:55.533320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.501 [2024-10-13 04:12:55.533329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.501 [2024-10-13 04:12:55.533335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:02.501 [2024-10-13 04:12:55.533341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:02.501 [2024-10-13 04:12:55.533348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:02.501 [2024-10-13 04:12:55.533354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:02.501 [2024-10-13 04:12:55.533361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:02.501 [2024-10-13 04:12:55.533367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.172 ms 00:22:02.501 [2024-10-13 04:12:55.533374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:02.501 [2024-10-13 04:12:55.533407] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
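The bdev stack underneath the FTL instance whose layout is dumped above was assembled with the RPC calls below, reconstructed from the trace earlier in this log (the lvstore and lvol UUIDs, 839809c3-... and 6f5ed80a-... in this run, are shown as placeholders):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # exposes namespace basen1
    $rpc bdev_lvol_create_lvstore basen1 lvs                            # lvstore on top of basen1
    $rpc bdev_lvol_create basen1p0 20480 -t -u <lvstore uuid>           # 20480 MiB thin-provisioned base volume
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # exposes namespace cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                            # 5120 MiB cachen1p0 for the NV cache
    $rpc -t 60 bdev_ftl_create -b ftl -d <base lvol uuid> -c cachen1p0 --l2p_dram_limit 2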
00:22:02.501 [2024-10-13 04:12:55.533416] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:22:04.403 [2024-10-13 04:12:57.544290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.403 [2024-10-13 04:12:57.544350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:22:04.403 [2024-10-13 04:12:57.544365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2010.874 ms 00:22:04.403 [2024-10-13 04:12:57.544375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.662 [2024-10-13 04:12:57.569509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.662 [2024-10-13 04:12:57.569552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:04.662 [2024-10-13 04:12:57.569564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.929 ms 00:22:04.662 [2024-10-13 04:12:57.569574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.662 [2024-10-13 04:12:57.569659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.662 [2024-10-13 04:12:57.569672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:04.662 [2024-10-13 04:12:57.569680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:22:04.662 [2024-10-13 04:12:57.569691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.662 [2024-10-13 04:12:57.599680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.662 [2024-10-13 04:12:57.599716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:04.662 [2024-10-13 04:12:57.599726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.955 ms 00:22:04.662 [2024-10-13 04:12:57.599735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.662 [2024-10-13 04:12:57.599763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.662 [2024-10-13 04:12:57.599774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:04.662 [2024-10-13 04:12:57.599782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:04.662 [2024-10-13 04:12:57.599793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.662 [2024-10-13 04:12:57.600128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.662 [2024-10-13 04:12:57.600145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:04.662 [2024-10-13 04:12:57.600153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:22:04.662 [2024-10-13 04:12:57.600162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.662 [2024-10-13 04:12:57.600206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.662 [2024-10-13 04:12:57.600217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:04.662 [2024-10-13 04:12:57.600225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:22:04.662 [2024-10-13 04:12:57.600236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.662 [2024-10-13 04:12:57.614016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.662 [2024-10-13 04:12:57.614044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:04.662 [2024-10-13 04:12:57.614053] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.760 ms 00:22:04.662 [2024-10-13 04:12:57.614064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.662 [2024-10-13 04:12:57.625219] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:04.662 [2024-10-13 04:12:57.626036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.662 [2024-10-13 04:12:57.626059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:04.662 [2024-10-13 04:12:57.626069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.887 ms 00:22:04.662 [2024-10-13 04:12:57.626077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.662 [2024-10-13 04:12:57.661959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.662 [2024-10-13 04:12:57.661999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:22:04.662 [2024-10-13 04:12:57.662014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.855 ms 00:22:04.662 [2024-10-13 04:12:57.662022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.662 [2024-10-13 04:12:57.662110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.662 [2024-10-13 04:12:57.662121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:04.663 [2024-10-13 04:12:57.662134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:22:04.663 [2024-10-13 04:12:57.662144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.663 [2024-10-13 04:12:57.684572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.663 [2024-10-13 04:12:57.684603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:22:04.663 [2024-10-13 04:12:57.684629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.372 ms 00:22:04.663 [2024-10-13 04:12:57.684638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.663 [2024-10-13 04:12:57.707016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.663 [2024-10-13 04:12:57.707046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:22:04.663 [2024-10-13 04:12:57.707058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.339 ms 00:22:04.663 [2024-10-13 04:12:57.707065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.663 [2024-10-13 04:12:57.707640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.663 [2024-10-13 04:12:57.707657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:04.663 [2024-10-13 04:12:57.707667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:22:04.663 [2024-10-13 04:12:57.707674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.663 [2024-10-13 04:12:57.774755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.663 [2024-10-13 04:12:57.774789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:22:04.663 [2024-10-13 04:12:57.774805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 67.047 ms 00:22:04.663 [2024-10-13 04:12:57.774813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.663 [2024-10-13 04:12:57.798450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:22:04.663 [2024-10-13 04:12:57.798484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:22:04.663 [2024-10-13 04:12:57.798506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.561 ms 00:22:04.663 [2024-10-13 04:12:57.798514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.663 [2024-10-13 04:12:57.821518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.663 [2024-10-13 04:12:57.821548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:22:04.663 [2024-10-13 04:12:57.821560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.964 ms 00:22:04.663 [2024-10-13 04:12:57.821568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.921 [2024-10-13 04:12:57.844404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.921 [2024-10-13 04:12:57.844433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:22:04.921 [2024-10-13 04:12:57.844445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.798 ms 00:22:04.921 [2024-10-13 04:12:57.844453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.921 [2024-10-13 04:12:57.844493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.922 [2024-10-13 04:12:57.844502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:04.922 [2024-10-13 04:12:57.844514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:04.922 [2024-10-13 04:12:57.844521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.922 [2024-10-13 04:12:57.844595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:04.922 [2024-10-13 04:12:57.844607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:04.922 [2024-10-13 04:12:57.844628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:22:04.922 [2024-10-13 04:12:57.844635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:04.922 [2024-10-13 04:12:57.845777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2326.556 ms, result 0 00:22:04.922 { 00:22:04.922 "name": "ftl", 00:22:04.922 "uuid": "32b67c36-f0ef-4eb8-a43b-d7ee19b6a02f" 00:22:04.922 } 00:22:04.922 04:12:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:22:04.922 [2024-10-13 04:12:58.052889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.922 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:22:05.180 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:22:05.438 [2024-10-13 04:12:58.445288] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:22:05.438 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:22:05.696 [2024-10-13 04:12:58.649653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:05.696 04:12:58 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:05.955 Fill FTL, iteration 1 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=77573 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 77573 /var/tmp/spdk.tgt.sock 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77573 ']' 00:22:05.955 04:12:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:22:05.955 04:12:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:22:05.955 04:12:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:05.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:22:05.955 04:12:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:22:05.955 04:12:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:05.955 04:12:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:05.955 [2024-10-13 04:12:59.077444] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
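As the surrounding trace shows, the FTL bdev is exported over NVMe/TCP by the target and then re-attached by a second, initiator process (a separate spdk_tgt on core 1 listening on /var/tmp/spdk.tgt.sock), where it appears as ftln1. Condensed from this run's RPC calls:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side (core 0, default RPC socket /var/tmp/spdk.sock)
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    # initiator side (core 1)
    $rpc -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # -> ftln1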
00:22:05.955 [2024-10-13 04:12:59.077565] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77573 ] 00:22:06.214 [2024-10-13 04:12:59.228057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.214 [2024-10-13 04:12:59.326228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.780 04:12:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:06.780 04:12:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:06.780 04:12:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:22:07.038 ftln1 00:22:07.038 04:13:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:22:07.038 04:13:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:22:07.296 04:13:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:22:07.296 04:13:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 77573 00:22:07.296 04:13:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77573 ']' 00:22:07.296 04:13:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77573 00:22:07.296 04:13:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:22:07.296 04:13:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:07.296 04:13:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77573 00:22:07.296 04:13:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:07.296 04:13:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:07.296 killing process with pid 77573 00:22:07.296 04:13:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77573' 00:22:07.296 04:13:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77573 00:22:07.296 04:13:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77573 00:22:08.672 04:13:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:22:08.672 04:13:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:22:08.930 [2024-10-13 04:13:01.877953] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
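Each "Fill FTL" iteration writes 1024 blocks of 1 MiB of random data to ftln1 at a queue depth of 2; the second iteration seeks past the first 1024 MiB. The spdk_dd invocation used for iteration 1 in this run is, in condensed form:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0   # iteration 2 uses --seek=1024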
00:22:08.930 [2024-10-13 04:13:01.878075] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77615 ] 00:22:08.930 [2024-10-13 04:13:02.028305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.188 [2024-10-13 04:13:02.133474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.579  [2024-10-13T04:13:04.702Z] Copying: 215/1024 [MB] (215 MBps) [2024-10-13T04:13:05.636Z] Copying: 457/1024 [MB] (242 MBps) [2024-10-13T04:13:06.568Z] Copying: 718/1024 [MB] (261 MBps) [2024-10-13T04:13:06.826Z] Copying: 976/1024 [MB] (258 MBps) [2024-10-13T04:13:07.391Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:22:14.231 00:22:14.231 Calculate MD5 checksum, iteration 1 00:22:14.231 04:13:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:22:14.231 04:13:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:22:14.231 04:13:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:14.231 04:13:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:14.231 04:13:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:14.231 04:13:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:14.231 04:13:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:14.231 04:13:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:14.231 [2024-10-13 04:13:07.320850] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
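The matching "Calculate MD5 checksum" step reads the same 1024 MiB back from ftln1 into a scratch file and records its MD5 in the sums array, presumably for comparison after the shutdown/upgrade path later in the run. Condensed from this iteration:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '   # 74a6463a05be3cc411e2f08d37e2a7a7 in this run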
00:22:14.231 [2024-10-13 04:13:07.321003] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77677 ] 00:22:14.489 [2024-10-13 04:13:07.483064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.489 [2024-10-13 04:13:07.560551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.862  [2024-10-13T04:13:09.588Z] Copying: 684/1024 [MB] (684 MBps) [2024-10-13T04:13:09.847Z] Copying: 1024/1024 [MB] (average 685 MBps) 00:22:16.687 00:22:16.687 04:13:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:22:16.687 04:13:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:19.217 04:13:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:19.217 Fill FTL, iteration 2 00:22:19.217 04:13:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=74a6463a05be3cc411e2f08d37e2a7a7 00:22:19.217 04:13:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:19.217 04:13:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:19.217 04:13:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:22:19.217 04:13:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:19.217 04:13:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:19.217 04:13:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:19.217 04:13:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:19.217 04:13:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:19.217 04:13:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:19.217 [2024-10-13 04:13:11.854710] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:22:19.218 [2024-10-13 04:13:11.854798] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77727 ] 00:22:19.218 [2024-10-13 04:13:11.995885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.218 [2024-10-13 04:13:12.092965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.593  [2024-10-13T04:13:14.688Z] Copying: 213/1024 [MB] (213 MBps) [2024-10-13T04:13:15.634Z] Copying: 471/1024 [MB] (258 MBps) [2024-10-13T04:13:16.617Z] Copying: 725/1024 [MB] (254 MBps) [2024-10-13T04:13:16.618Z] Copying: 985/1024 [MB] (260 MBps) [2024-10-13T04:13:17.185Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:22:24.025 00:22:24.025 Calculate MD5 checksum, iteration 2 00:22:24.025 04:13:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:22:24.025 04:13:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:22:24.025 04:13:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:24.025 04:13:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:24.025 04:13:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:24.025 04:13:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:24.025 04:13:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:24.025 04:13:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:24.284 [2024-10-13 04:13:17.224689] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:22:24.284 [2024-10-13 04:13:17.224963] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77789 ] 00:22:24.284 [2024-10-13 04:13:17.373449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.543 [2024-10-13 04:13:17.451786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.916  [2024-10-13T04:13:19.643Z] Copying: 655/1024 [MB] (655 MBps) [2024-10-13T04:13:20.579Z] Copying: 1024/1024 [MB] (average 656 MBps) 00:22:27.419 00:22:27.419 04:13:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:22:27.419 04:13:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:29.950 04:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:29.950 04:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e2b8f916ec557a1f0a8434093d734cf8 00:22:29.950 04:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:29.950 04:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:29.950 04:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:29.950 [2024-10-13 04:13:22.703108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:29.950 [2024-10-13 04:13:22.703146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:29.950 [2024-10-13 04:13:22.703158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:29.950 [2024-10-13 04:13:22.703165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:29.950 [2024-10-13 04:13:22.703183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:29.950 [2024-10-13 04:13:22.703190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:29.950 [2024-10-13 04:13:22.703197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:29.950 [2024-10-13 04:13:22.703202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:29.950 [2024-10-13 04:13:22.703219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:29.950 [2024-10-13 04:13:22.703226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:29.950 [2024-10-13 04:13:22.703232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:29.950 [2024-10-13 04:13:22.703238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:29.950 [2024-10-13 04:13:22.703284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.169 ms, result 0 00:22:29.950 true 00:22:29.950 04:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:29.950 { 00:22:29.950 "name": "ftl", 00:22:29.950 "properties": [ 00:22:29.950 { 00:22:29.950 "name": "superblock_version", 00:22:29.950 "value": 5, 00:22:29.950 "read-only": true 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "name": "base_device", 00:22:29.950 "bands": [ 00:22:29.950 { 00:22:29.950 "id": 0, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 
00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 1, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 2, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 3, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 4, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 5, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 6, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 7, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 8, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 9, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 10, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 11, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 12, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 13, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 14, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 15, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 16, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 17, 00:22:29.950 "state": "FREE", 00:22:29.950 "validity": 0.0 00:22:29.950 } 00:22:29.950 ], 00:22:29.950 "read-only": true 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "name": "cache_device", 00:22:29.950 "type": "bdev", 00:22:29.950 "chunks": [ 00:22:29.950 { 00:22:29.950 "id": 0, 00:22:29.950 "state": "INACTIVE", 00:22:29.950 "utilization": 0.0 00:22:29.950 }, 00:22:29.950 { 00:22:29.950 "id": 1, 00:22:29.950 "state": "CLOSED", 00:22:29.951 "utilization": 1.0 00:22:29.951 }, 00:22:29.951 { 00:22:29.951 "id": 2, 00:22:29.951 "state": "CLOSED", 00:22:29.951 "utilization": 1.0 00:22:29.951 }, 00:22:29.951 { 00:22:29.951 "id": 3, 00:22:29.951 "state": "OPEN", 00:22:29.951 "utilization": 0.001953125 00:22:29.951 }, 00:22:29.951 { 00:22:29.951 "id": 4, 00:22:29.951 "state": "OPEN", 00:22:29.951 "utilization": 0.0 00:22:29.951 } 00:22:29.951 ], 00:22:29.951 "read-only": true 00:22:29.951 }, 00:22:29.951 { 00:22:29.951 "name": "verbose_mode", 00:22:29.951 "value": true, 00:22:29.951 "unit": "", 00:22:29.951 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:29.951 }, 00:22:29.951 { 00:22:29.951 "name": "prep_upgrade_on_shutdown", 00:22:29.951 "value": false, 00:22:29.951 "unit": "", 00:22:29.951 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:29.951 } 00:22:29.951 ] 00:22:29.951 } 00:22:29.951 04:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:22:29.951 [2024-10-13 04:13:23.021043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
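For context, the RPC exchange traced above enables verbose_mode on the ftl bdev and dumps its properties (18 FREE bands; 5 cache chunks, two CLOSED and one OPEN with data), and the trace that continues below flips prep_upgrade_on_shutdown and then counts the chunks that still hold data. A minimal sketch of that pattern, assuming only a running SPDK target with an FTL bdev named ftl and the rpc.py path used in this run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Expose the advanced FTL properties (verbose mode), as in step @52 above.
  $RPC bdev_ftl_set_property -b ftl -p verbose_mode -v true

  # Arm the upgrade path for the next shutdown, as in step @56 below.
  $RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

  # Count cache chunks still holding data, with the same jq filter the test
  # applies at step @63 below (expected to be non-zero at this point).
  used=$($RPC bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  echo "used chunks: $used"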
00:22:29.951 [2024-10-13 04:13:23.021083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:29.951 [2024-10-13 04:13:23.021093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:29.951 [2024-10-13 04:13:23.021099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:29.951 [2024-10-13 04:13:23.021117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:29.951 [2024-10-13 04:13:23.021124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:29.951 [2024-10-13 04:13:23.021129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:29.951 [2024-10-13 04:13:23.021135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:29.951 [2024-10-13 04:13:23.021150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:29.951 [2024-10-13 04:13:23.021156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:29.951 [2024-10-13 04:13:23.021162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:29.951 [2024-10-13 04:13:23.021168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:29.951 [2024-10-13 04:13:23.021212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.163 ms, result 0 00:22:29.951 true 00:22:29.951 04:13:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:22:29.951 04:13:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:29.951 04:13:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:30.209 04:13:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:22:30.209 04:13:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:22:30.209 04:13:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:30.209 [2024-10-13 04:13:23.345307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.209 [2024-10-13 04:13:23.345342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:30.209 [2024-10-13 04:13:23.345350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:30.210 [2024-10-13 04:13:23.345356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.210 [2024-10-13 04:13:23.345372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.210 [2024-10-13 04:13:23.345379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:30.210 [2024-10-13 04:13:23.345384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:30.210 [2024-10-13 04:13:23.345390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:30.210 [2024-10-13 04:13:23.345405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:30.210 [2024-10-13 04:13:23.345410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:30.210 [2024-10-13 04:13:23.345417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:30.210 [2024-10-13 04:13:23.345422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:22:30.210 [2024-10-13 04:13:23.345464] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.148 ms, result 0 00:22:30.210 true 00:22:30.210 04:13:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:30.468 { 00:22:30.468 "name": "ftl", 00:22:30.468 "properties": [ 00:22:30.468 { 00:22:30.468 "name": "superblock_version", 00:22:30.468 "value": 5, 00:22:30.468 "read-only": true 00:22:30.468 }, 00:22:30.468 { 00:22:30.468 "name": "base_device", 00:22:30.468 "bands": [ 00:22:30.468 { 00:22:30.468 "id": 0, 00:22:30.468 "state": "FREE", 00:22:30.468 "validity": 0.0 00:22:30.468 }, 00:22:30.468 { 00:22:30.468 "id": 1, 00:22:30.468 "state": "FREE", 00:22:30.468 "validity": 0.0 00:22:30.468 }, 00:22:30.468 { 00:22:30.468 "id": 2, 00:22:30.468 "state": "FREE", 00:22:30.468 "validity": 0.0 00:22:30.468 }, 00:22:30.468 { 00:22:30.468 "id": 3, 00:22:30.468 "state": "FREE", 00:22:30.468 "validity": 0.0 00:22:30.468 }, 00:22:30.468 { 00:22:30.468 "id": 4, 00:22:30.468 "state": "FREE", 00:22:30.468 "validity": 0.0 00:22:30.468 }, 00:22:30.468 { 00:22:30.468 "id": 5, 00:22:30.468 "state": "FREE", 00:22:30.468 "validity": 0.0 00:22:30.468 }, 00:22:30.468 { 00:22:30.468 "id": 6, 00:22:30.468 "state": "FREE", 00:22:30.468 "validity": 0.0 00:22:30.468 }, 00:22:30.468 { 00:22:30.468 "id": 7, 00:22:30.468 "state": "FREE", 00:22:30.468 "validity": 0.0 00:22:30.468 }, 00:22:30.468 { 00:22:30.468 "id": 8, 00:22:30.468 "state": "FREE", 00:22:30.468 "validity": 0.0 00:22:30.468 }, 00:22:30.468 { 00:22:30.468 "id": 9, 00:22:30.468 "state": "FREE", 00:22:30.468 "validity": 0.0 00:22:30.468 }, 00:22:30.468 { 00:22:30.468 "id": 10, 00:22:30.468 "state": "FREE", 00:22:30.468 "validity": 0.0 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "id": 11, 00:22:30.469 "state": "FREE", 00:22:30.469 "validity": 0.0 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "id": 12, 00:22:30.469 "state": "FREE", 00:22:30.469 "validity": 0.0 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "id": 13, 00:22:30.469 "state": "FREE", 00:22:30.469 "validity": 0.0 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "id": 14, 00:22:30.469 "state": "FREE", 00:22:30.469 "validity": 0.0 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "id": 15, 00:22:30.469 "state": "FREE", 00:22:30.469 "validity": 0.0 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "id": 16, 00:22:30.469 "state": "FREE", 00:22:30.469 "validity": 0.0 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "id": 17, 00:22:30.469 "state": "FREE", 00:22:30.469 "validity": 0.0 00:22:30.469 } 00:22:30.469 ], 00:22:30.469 "read-only": true 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "name": "cache_device", 00:22:30.469 "type": "bdev", 00:22:30.469 "chunks": [ 00:22:30.469 { 00:22:30.469 "id": 0, 00:22:30.469 "state": "INACTIVE", 00:22:30.469 "utilization": 0.0 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "id": 1, 00:22:30.469 "state": "CLOSED", 00:22:30.469 "utilization": 1.0 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "id": 2, 00:22:30.469 "state": "CLOSED", 00:22:30.469 "utilization": 1.0 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "id": 3, 00:22:30.469 "state": "OPEN", 00:22:30.469 "utilization": 0.001953125 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "id": 4, 00:22:30.469 "state": "OPEN", 00:22:30.469 "utilization": 0.0 00:22:30.469 } 00:22:30.469 ], 00:22:30.469 "read-only": true 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "name": "verbose_mode", 
00:22:30.469 "value": true, 00:22:30.469 "unit": "", 00:22:30.469 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:30.469 }, 00:22:30.469 { 00:22:30.469 "name": "prep_upgrade_on_shutdown", 00:22:30.469 "value": true, 00:22:30.469 "unit": "", 00:22:30.469 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:30.469 } 00:22:30.469 ] 00:22:30.469 } 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 77459 ]] 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 77459 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77459 ']' 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77459 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77459 00:22:30.469 killing process with pid 77459 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77459' 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77459 00:22:30.469 04:13:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77459 00:22:31.035 [2024-10-13 04:13:24.078807] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:22:31.035 [2024-10-13 04:13:24.088906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:31.035 [2024-10-13 04:13:24.088937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:22:31.035 [2024-10-13 04:13:24.088947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:31.035 [2024-10-13 04:13:24.088953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:31.035 [2024-10-13 04:13:24.088970] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:22:31.035 [2024-10-13 04:13:24.091052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:31.035 [2024-10-13 04:13:24.091072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:22:31.035 [2024-10-13 04:13:24.091081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.072 ms 00:22:31.035 [2024-10-13 04:13:24.091088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.535564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.535626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:22:37.595 [2024-10-13 04:13:30.535640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6444.433 ms 00:22:37.595 [2024-10-13 04:13:30.535649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.536960] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.536975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:22:37.595 [2024-10-13 04:13:30.536988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.295 ms 00:22:37.595 [2024-10-13 04:13:30.536996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.538113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.538137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:22:37.595 [2024-10-13 04:13:30.538145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.092 ms 00:22:37.595 [2024-10-13 04:13:30.538152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.547917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.547950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:22:37.595 [2024-10-13 04:13:30.547959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.734 ms 00:22:37.595 [2024-10-13 04:13:30.547967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.554456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.554489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:22:37.595 [2024-10-13 04:13:30.554499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.458 ms 00:22:37.595 [2024-10-13 04:13:30.554506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.554582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.554591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:22:37.595 [2024-10-13 04:13:30.554600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:22:37.595 [2024-10-13 04:13:30.554607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.563807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.563845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:22:37.595 [2024-10-13 04:13:30.563854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.166 ms 00:22:37.595 [2024-10-13 04:13:30.563861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.573300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.573426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:22:37.595 [2024-10-13 04:13:30.573441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.410 ms 00:22:37.595 [2024-10-13 04:13:30.573447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.582305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.582334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:22:37.595 [2024-10-13 04:13:30.582342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.830 ms 00:22:37.595 [2024-10-13 04:13:30.582349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.591120] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.591149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:22:37.595 [2024-10-13 04:13:30.591158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.705 ms 00:22:37.595 [2024-10-13 04:13:30.591165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.591192] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:22:37.595 [2024-10-13 04:13:30.591209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:37.595 [2024-10-13 04:13:30.591219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:22:37.595 [2024-10-13 04:13:30.591232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:22:37.595 [2024-10-13 04:13:30.591240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:37.595 [2024-10-13 04:13:30.591351] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:22:37.595 [2024-10-13 04:13:30.591358] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 32b67c36-f0ef-4eb8-a43b-d7ee19b6a02f 00:22:37.595 [2024-10-13 04:13:30.591365] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:22:37.595 [2024-10-13 04:13:30.591372] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:22:37.595 [2024-10-13 04:13:30.591379] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:22:37.595 [2024-10-13 04:13:30.591386] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:22:37.595 [2024-10-13 04:13:30.591393] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:22:37.595 [2024-10-13 04:13:30.591400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:22:37.595 [2024-10-13 04:13:30.591407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:22:37.595 [2024-10-13 04:13:30.591413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:22:37.595 [2024-10-13 04:13:30.591419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:22:37.595 [2024-10-13 04:13:30.591427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.591436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:22:37.595 [2024-10-13 04:13:30.591447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.235 ms 00:22:37.595 [2024-10-13 04:13:30.591454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.603802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.603919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:22:37.595 [2024-10-13 04:13:30.603934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.330 ms 00:22:37.595 [2024-10-13 04:13:30.603941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.604285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:37.595 [2024-10-13 04:13:30.604305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:22:37.595 [2024-10-13 04:13:30.604314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:22:37.595 [2024-10-13 04:13:30.604321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.645839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:37.595 [2024-10-13 04:13:30.645872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:37.595 [2024-10-13 04:13:30.645883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:37.595 [2024-10-13 04:13:30.645891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.645919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:37.595 [2024-10-13 04:13:30.645932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:37.595 [2024-10-13 04:13:30.645939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:37.595 [2024-10-13 04:13:30.645947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.646017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:37.595 [2024-10-13 04:13:30.646027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:37.595 [2024-10-13 04:13:30.646035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:37.595 [2024-10-13 04:13:30.646042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.646057] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:37.595 [2024-10-13 04:13:30.646065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:37.595 [2024-10-13 04:13:30.646076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:37.595 [2024-10-13 04:13:30.646083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.595 [2024-10-13 04:13:30.719348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:37.595 [2024-10-13 04:13:30.719387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:37.595 [2024-10-13 04:13:30.719396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:37.595 [2024-10-13 04:13:30.719402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.854 [2024-10-13 04:13:30.766872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:37.854 [2024-10-13 04:13:30.766909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:37.854 [2024-10-13 04:13:30.766922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:37.854 [2024-10-13 04:13:30.766929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.854 [2024-10-13 04:13:30.766984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:37.854 [2024-10-13 04:13:30.766991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:37.854 [2024-10-13 04:13:30.766997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:37.854 [2024-10-13 04:13:30.767003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.854 [2024-10-13 04:13:30.767046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:37.854 [2024-10-13 04:13:30.767054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:37.854 [2024-10-13 04:13:30.767060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:37.854 [2024-10-13 04:13:30.767067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.854 [2024-10-13 04:13:30.767137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:37.854 [2024-10-13 04:13:30.767144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:37.854 [2024-10-13 04:13:30.767150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:37.854 [2024-10-13 04:13:30.767156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.854 [2024-10-13 04:13:30.767178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:37.854 [2024-10-13 04:13:30.767185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:22:37.854 [2024-10-13 04:13:30.767191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:37.854 [2024-10-13 04:13:30.767197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.854 [2024-10-13 04:13:30.767227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:37.854 [2024-10-13 04:13:30.767234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:37.854 [2024-10-13 04:13:30.767239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:37.854 [2024-10-13 04:13:30.767245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.854 
[2024-10-13 04:13:30.767280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:37.854 [2024-10-13 04:13:30.767287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:37.854 [2024-10-13 04:13:30.767293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:37.854 [2024-10-13 04:13:30.767301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:37.854 [2024-10-13 04:13:30.767389] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 6678.444 ms, result 0 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=77959 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 77959 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77959 ']' 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:42.078 04:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:42.078 [2024-10-13 04:13:34.957467] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
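At this point the 'FTL shutdown' management process above has finished (L2P, valid map, band/trim metadata and the superblock persisted, then the initialization steps rolled back, about 6.7 s in total), and the harness restarts the target from the JSON config it saved earlier. A minimal sketch of that shutdown/restart cycle, reusing the helper functions (killprocess, waitforlisten), the pid variable the harness tracks, and the paths that appear in this run's trace rather than defining them here:

  SPDK=/home/vagrant/spdk_repo/spdk
  CONFIG=$SPDK/test/ftl/config/tgt.json

  # Stop the target; with prep_upgrade_on_shutdown=true the FTL instance
  # persists its state before exiting (the 'FTL shutdown' process above).
  killprocess "$spdk_tgt_pid"

  # Relaunch spdk_tgt from the saved config and wait for its RPC socket;
  # the FTL bdev is brought up again in the 'FTL startup' sequence below.
  "$SPDK/build/bin/spdk_tgt" --cpumask='[0]' --config="$CONFIG" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"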
00:22:42.078 [2024-10-13 04:13:34.958009] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77959 ] 00:22:42.078 [2024-10-13 04:13:35.108841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.078 [2024-10-13 04:13:35.209476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.013 [2024-10-13 04:13:35.899880] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:43.013 [2024-10-13 04:13:35.900161] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:43.013 [2024-10-13 04:13:36.044142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.013 [2024-10-13 04:13:36.044197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:43.013 [2024-10-13 04:13:36.044209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:43.013 [2024-10-13 04:13:36.044218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.013 [2024-10-13 04:13:36.044267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.013 [2024-10-13 04:13:36.044279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:43.013 [2024-10-13 04:13:36.044287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:22:43.013 [2024-10-13 04:13:36.044295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.013 [2024-10-13 04:13:36.044320] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:43.013 [2024-10-13 04:13:36.045033] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:43.013 [2024-10-13 04:13:36.045050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.013 [2024-10-13 04:13:36.045057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:43.013 [2024-10-13 04:13:36.045066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.739 ms 00:22:43.013 [2024-10-13 04:13:36.045073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.013 [2024-10-13 04:13:36.046135] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:22:43.013 [2024-10-13 04:13:36.058414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.013 [2024-10-13 04:13:36.058448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:22:43.013 [2024-10-13 04:13:36.058459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.281 ms 00:22:43.013 [2024-10-13 04:13:36.058468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.013 [2024-10-13 04:13:36.058527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.013 [2024-10-13 04:13:36.058536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:22:43.013 [2024-10-13 04:13:36.058544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:22:43.013 [2024-10-13 04:13:36.058551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.013 [2024-10-13 04:13:36.063307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.013 [2024-10-13 
04:13:36.063467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:43.013 [2024-10-13 04:13:36.063483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.676 ms 00:22:43.013 [2024-10-13 04:13:36.063496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.013 [2024-10-13 04:13:36.063555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.013 [2024-10-13 04:13:36.063565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:43.013 [2024-10-13 04:13:36.063573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:22:43.013 [2024-10-13 04:13:36.063580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.013 [2024-10-13 04:13:36.063648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.013 [2024-10-13 04:13:36.063659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:43.013 [2024-10-13 04:13:36.063667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:22:43.013 [2024-10-13 04:13:36.063674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.013 [2024-10-13 04:13:36.063699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:43.013 [2024-10-13 04:13:36.066970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.013 [2024-10-13 04:13:36.066998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:43.013 [2024-10-13 04:13:36.067009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.276 ms 00:22:43.013 [2024-10-13 04:13:36.067016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.013 [2024-10-13 04:13:36.067040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.013 [2024-10-13 04:13:36.067051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:43.013 [2024-10-13 04:13:36.067058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:43.013 [2024-10-13 04:13:36.067066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.014 [2024-10-13 04:13:36.067087] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:22:43.014 [2024-10-13 04:13:36.067104] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:22:43.014 [2024-10-13 04:13:36.067138] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:22:43.014 [2024-10-13 04:13:36.067155] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:22:43.014 [2024-10-13 04:13:36.067256] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:22:43.014 [2024-10-13 04:13:36.067266] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:43.014 [2024-10-13 04:13:36.067276] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:22:43.014 [2024-10-13 04:13:36.067286] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:43.014 [2024-10-13 04:13:36.067294] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:22:43.014 [2024-10-13 04:13:36.067302] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:43.014 [2024-10-13 04:13:36.067309] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:43.014 [2024-10-13 04:13:36.067317] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:22:43.014 [2024-10-13 04:13:36.067326] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:22:43.014 [2024-10-13 04:13:36.067333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.014 [2024-10-13 04:13:36.067340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:43.014 [2024-10-13 04:13:36.067348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.248 ms 00:22:43.014 [2024-10-13 04:13:36.067354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.014 [2024-10-13 04:13:36.067438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.014 [2024-10-13 04:13:36.067446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:43.014 [2024-10-13 04:13:36.067454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:22:43.014 [2024-10-13 04:13:36.067461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.014 [2024-10-13 04:13:36.067562] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:43.014 [2024-10-13 04:13:36.067571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:43.014 [2024-10-13 04:13:36.067579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:43.014 [2024-10-13 04:13:36.067587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:43.014 [2024-10-13 04:13:36.067595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:43.014 [2024-10-13 04:13:36.067601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:43.014 [2024-10-13 04:13:36.067608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:43.014 [2024-10-13 04:13:36.067639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:43.014 [2024-10-13 04:13:36.067649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:43.014 [2024-10-13 04:13:36.067655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:43.014 [2024-10-13 04:13:36.067662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:43.014 [2024-10-13 04:13:36.067668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:22:43.014 [2024-10-13 04:13:36.067675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:43.014 [2024-10-13 04:13:36.067681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:43.014 [2024-10-13 04:13:36.067688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:22:43.014 [2024-10-13 04:13:36.067694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:43.014 [2024-10-13 04:13:36.067702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:43.014 [2024-10-13 04:13:36.067709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:22:43.014 [2024-10-13 04:13:36.067716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:43.014 [2024-10-13 04:13:36.067722] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:43.014 [2024-10-13 04:13:36.067729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:43.014 [2024-10-13 04:13:36.067735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:43.014 [2024-10-13 04:13:36.067742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:43.014 [2024-10-13 04:13:36.067748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:43.014 [2024-10-13 04:13:36.067755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:43.014 [2024-10-13 04:13:36.067761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:43.014 [2024-10-13 04:13:36.067774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:43.014 [2024-10-13 04:13:36.067780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:43.014 [2024-10-13 04:13:36.067786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:43.014 [2024-10-13 04:13:36.067793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:22:43.014 [2024-10-13 04:13:36.067799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:43.014 [2024-10-13 04:13:36.067806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:43.014 [2024-10-13 04:13:36.067812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:22:43.014 [2024-10-13 04:13:36.067818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:43.014 [2024-10-13 04:13:36.067825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:43.014 [2024-10-13 04:13:36.067831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:22:43.014 [2024-10-13 04:13:36.067837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:43.014 [2024-10-13 04:13:36.067844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:22:43.014 [2024-10-13 04:13:36.067850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:22:43.014 [2024-10-13 04:13:36.067856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:43.014 [2024-10-13 04:13:36.067863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:22:43.014 [2024-10-13 04:13:36.067869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:22:43.014 [2024-10-13 04:13:36.067875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:43.014 [2024-10-13 04:13:36.067881] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:22:43.014 [2024-10-13 04:13:36.067888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:43.014 [2024-10-13 04:13:36.067895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:43.014 [2024-10-13 04:13:36.067902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:43.014 [2024-10-13 04:13:36.067909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:43.014 [2024-10-13 04:13:36.067917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:43.014 [2024-10-13 04:13:36.067924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:43.014 [2024-10-13 04:13:36.067931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:43.014 [2024-10-13 04:13:36.067937] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:43.014 [2024-10-13 04:13:36.067944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:43.014 [2024-10-13 04:13:36.067951] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:43.014 [2024-10-13 04:13:36.067960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:43.014 [2024-10-13 04:13:36.067970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:43.014 [2024-10-13 04:13:36.067978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:22:43.014 [2024-10-13 04:13:36.067985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:22:43.014 [2024-10-13 04:13:36.067992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:22:43.014 [2024-10-13 04:13:36.068000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:22:43.014 [2024-10-13 04:13:36.068007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:22:43.014 [2024-10-13 04:13:36.068014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:22:43.014 [2024-10-13 04:13:36.068021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:22:43.014 [2024-10-13 04:13:36.068028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:22:43.014 [2024-10-13 04:13:36.068035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:22:43.014 [2024-10-13 04:13:36.068042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:22:43.014 [2024-10-13 04:13:36.068048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:22:43.014 [2024-10-13 04:13:36.068055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:22:43.014 [2024-10-13 04:13:36.068062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:22:43.014 [2024-10-13 04:13:36.068069] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:43.014 [2024-10-13 04:13:36.068076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:43.014 [2024-10-13 04:13:36.068084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:43.014 [2024-10-13 04:13:36.068091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:43.014 [2024-10-13 04:13:36.068098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:43.014 [2024-10-13 04:13:36.068122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:43.014 [2024-10-13 04:13:36.068129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:43.014 [2024-10-13 04:13:36.068136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:43.014 [2024-10-13 04:13:36.068144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.635 ms 00:22:43.014 [2024-10-13 04:13:36.068151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:43.014 [2024-10-13 04:13:36.068206] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:22:43.014 [2024-10-13 04:13:36.068223] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:22:45.556 [2024-10-13 04:13:38.392043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.392086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:22:45.556 [2024-10-13 04:13:38.392098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2323.827 ms 00:22:45.556 [2024-10-13 04:13:38.392111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.412579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.412627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:45.556 [2024-10-13 04:13:38.412637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.297 ms 00:22:45.556 [2024-10-13 04:13:38.412643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.412703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.412712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:45.556 [2024-10-13 04:13:38.412718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:22:45.556 [2024-10-13 04:13:38.412728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.436601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.436641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:45.556 [2024-10-13 04:13:38.436649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.843 ms 00:22:45.556 [2024-10-13 04:13:38.436655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.436678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.436687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:45.556 [2024-10-13 04:13:38.436694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:45.556 [2024-10-13 04:13:38.436699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.437002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.437015] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:45.556 [2024-10-13 04:13:38.437022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.257 ms 00:22:45.556 [2024-10-13 04:13:38.437028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.437058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.437064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:45.556 [2024-10-13 04:13:38.437074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:22:45.556 [2024-10-13 04:13:38.437079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.448385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.448411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:45.556 [2024-10-13 04:13:38.448419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.289 ms 00:22:45.556 [2024-10-13 04:13:38.448425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.457966] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:45.556 [2024-10-13 04:13:38.457995] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:22:45.556 [2024-10-13 04:13:38.458004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.458011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:22:45.556 [2024-10-13 04:13:38.458018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.504 ms 00:22:45.556 [2024-10-13 04:13:38.458023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.468308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.468336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:22:45.556 [2024-10-13 04:13:38.468345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.255 ms 00:22:45.556 [2024-10-13 04:13:38.468351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.476997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.477021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:22:45.556 [2024-10-13 04:13:38.477028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.615 ms 00:22:45.556 [2024-10-13 04:13:38.477034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.485350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.485375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:22:45.556 [2024-10-13 04:13:38.485383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.289 ms 00:22:45.556 [2024-10-13 04:13:38.485388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.485872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.485889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:45.556 [2024-10-13 
04:13:38.485896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.414 ms 00:22:45.556 [2024-10-13 04:13:38.485903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.539884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.539926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:22:45.556 [2024-10-13 04:13:38.539941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.966 ms 00:22:45.556 [2024-10-13 04:13:38.539947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.547726] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:45.556 [2024-10-13 04:13:38.548260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.548284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:45.556 [2024-10-13 04:13:38.548292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.277 ms 00:22:45.556 [2024-10-13 04:13:38.548298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.548360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.548368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:22:45.556 [2024-10-13 04:13:38.548375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:22:45.556 [2024-10-13 04:13:38.548386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.548422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.548429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:45.556 [2024-10-13 04:13:38.548436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:22:45.556 [2024-10-13 04:13:38.548442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.548458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.548464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:45.556 [2024-10-13 04:13:38.548471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:45.556 [2024-10-13 04:13:38.548476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.548502] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:22:45.556 [2024-10-13 04:13:38.548511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.548517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:22:45.556 [2024-10-13 04:13:38.548522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:22:45.556 [2024-10-13 04:13:38.548528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.565648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.565674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:22:45.556 [2024-10-13 04:13:38.565682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.105 ms 00:22:45.556 [2024-10-13 04:13:38.565691] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.565748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:45.556 [2024-10-13 04:13:38.565755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:45.556 [2024-10-13 04:13:38.565761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:22:45.556 [2024-10-13 04:13:38.565767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:45.556 [2024-10-13 04:13:38.566557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2522.114 ms, result 0 00:22:45.557 [2024-10-13 04:13:38.581925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.557 [2024-10-13 04:13:38.597930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:22:45.557 [2024-10-13 04:13:38.606029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:46.124 04:13:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.124 04:13:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:46.124 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:46.124 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:22:46.124 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:46.383 [2024-10-13 04:13:39.330505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:46.383 [2024-10-13 04:13:39.330539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:46.383 [2024-10-13 04:13:39.330550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:46.383 [2024-10-13 04:13:39.330556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:46.383 [2024-10-13 04:13:39.330574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:46.383 [2024-10-13 04:13:39.330583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:46.383 [2024-10-13 04:13:39.330590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:46.383 [2024-10-13 04:13:39.330596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:46.383 [2024-10-13 04:13:39.330611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:46.383 [2024-10-13 04:13:39.330631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:46.383 [2024-10-13 04:13:39.330637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:46.383 [2024-10-13 04:13:39.330643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:46.383 [2024-10-13 04:13:39.330687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.176 ms, result 0 00:22:46.383 true 00:22:46.383 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:46.383 { 00:22:46.383 "name": "ftl", 00:22:46.383 "properties": [ 00:22:46.383 { 00:22:46.383 "name": "superblock_version", 00:22:46.383 "value": 5, 00:22:46.383 "read-only": true 00:22:46.383 }, 
00:22:46.383 { 00:22:46.383 "name": "base_device", 00:22:46.383 "bands": [ 00:22:46.383 { 00:22:46.383 "id": 0, 00:22:46.383 "state": "CLOSED", 00:22:46.383 "validity": 1.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 1, 00:22:46.383 "state": "CLOSED", 00:22:46.383 "validity": 1.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 2, 00:22:46.383 "state": "CLOSED", 00:22:46.383 "validity": 0.007843137254901933 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 3, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 4, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 5, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 6, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 7, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 8, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 9, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 10, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 11, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 12, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 13, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 14, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 15, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 16, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 17, 00:22:46.383 "state": "FREE", 00:22:46.383 "validity": 0.0 00:22:46.383 } 00:22:46.383 ], 00:22:46.383 "read-only": true 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "name": "cache_device", 00:22:46.383 "type": "bdev", 00:22:46.383 "chunks": [ 00:22:46.383 { 00:22:46.383 "id": 0, 00:22:46.383 "state": "INACTIVE", 00:22:46.383 "utilization": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 1, 00:22:46.383 "state": "OPEN", 00:22:46.383 "utilization": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 2, 00:22:46.383 "state": "OPEN", 00:22:46.383 "utilization": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 3, 00:22:46.383 "state": "FREE", 00:22:46.383 "utilization": 0.0 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "id": 4, 00:22:46.383 "state": "FREE", 00:22:46.383 "utilization": 0.0 00:22:46.383 } 00:22:46.383 ], 00:22:46.383 "read-only": true 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "name": "verbose_mode", 00:22:46.383 "value": true, 00:22:46.383 "unit": "", 00:22:46.383 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:46.383 }, 00:22:46.383 { 00:22:46.383 "name": "prep_upgrade_on_shutdown", 00:22:46.383 "value": false, 00:22:46.383 "unit": "", 00:22:46.383 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:46.383 } 00:22:46.383 ] 00:22:46.383 } 00:22:46.383 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:46.383 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:22:46.383 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:46.642 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:22:46.642 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:22:46.642 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:22:46.642 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:46.642 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:22:46.901 Validate MD5 checksum, iteration 1 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:46.901 04:13:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:46.901 [2024-10-13 04:13:40.016585] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:22:46.901 [2024-10-13 04:13:40.017320] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78029 ] 00:22:47.160 [2024-10-13 04:13:40.168415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.160 [2024-10-13 04:13:40.266517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.061  [2024-10-13T04:13:42.479Z] Copying: 689/1024 [MB] (689 MBps) [2024-10-13T04:13:43.415Z] Copying: 1024/1024 [MB] (average 690 MBps) 00:22:50.255 00:22:50.255 04:13:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:22:50.255 04:13:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:52.786 04:13:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:52.786 Validate MD5 checksum, iteration 2 00:22:52.786 04:13:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=74a6463a05be3cc411e2f08d37e2a7a7 00:22:52.786 04:13:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 74a6463a05be3cc411e2f08d37e2a7a7 != \7\4\a\6\4\6\3\a\0\5\b\e\3\c\c\4\1\1\e\2\f\0\8\d\3\7\e\2\a\7\a\7 ]] 00:22:52.786 04:13:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:52.786 04:13:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:52.786 04:13:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:22:52.786 04:13:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:52.786 04:13:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:52.786 04:13:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:52.786 04:13:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:52.786 04:13:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:52.786 04:13:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:52.786 [2024-10-13 04:13:45.553698] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
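Iteration 1 above is one pass of the checksum loop: tcp_dd (the helper seen in the xtrace, which drives spdk_dd against the NVMe/TCP-attached ftln1 namespace) copies 1024 x 1 MiB blocks out of the FTL bdev at a given offset, md5sum piped through cut records the digest, and --skip then advances by 1024 so the next iteration reads the following gigabyte. A condensed sketch of that pattern, assuming a small array named expected to carry the digests between the pre- and post-restart passes (expected, testdir and the loop structure are illustrative, not lifted from the script):

    skip=0
    for i in 0 1; do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1024 x 1 MiB blocks from ftln1 over NVMe/TCP into a scratch file.
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        # The first run records the digest; the re-run after the dirty restart must reproduce it.
        if [[ $sum != "${expected[$i]:-$sum}" ]]; then
            echo "MD5 mismatch in iteration $((i + 1))" >&2
            exit 1
        fi
        expected[$i]=$sum
        skip=$((skip + 1024))
    done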
00:22:52.786 [2024-10-13 04:13:45.553969] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78096 ] 00:22:52.786 [2024-10-13 04:13:45.696795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.786 [2024-10-13 04:13:45.793722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.169  [2024-10-13T04:13:47.937Z] Copying: 778/1024 [MB] (778 MBps) [2024-10-13T04:13:48.503Z] Copying: 1024/1024 [MB] (average 755 MBps) 00:22:55.343 00:22:55.343 04:13:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:22:55.343 04:13:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e2b8f916ec557a1f0a8434093d734cf8 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e2b8f916ec557a1f0a8434093d734cf8 != \e\2\b\8\f\9\1\6\e\c\5\5\7\a\1\f\0\a\8\4\3\4\0\9\3\d\7\3\4\c\f\8 ]] 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 77959 ]] 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 77959 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78146 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78146 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78146 ']' 00:22:57.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
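At this point the first checksum pass is complete and the dirty-shutdown half of the test begins: the running target (pid 77959) is killed with SIGKILL so FTL gets no chance to persist anything, and a fresh spdk_tgt is launched from the tgt.json saved earlier; the "SHM: clean 0, shm_clean 0" notice and the "Recover band state" / "Restore P2L checkpoints" / "Recover open chunk" sequence that follow are that recovery path running. A rough sketch of the restart, using the spdk_tgt_bin / spdk_tgt_cpumask / spdk_tgt_cnfg names visible in the log (the backgrounding and pid capture are illustrative approximations):

    kill -9 "$spdk_tgt_pid"        # SIGKILL: no FTL shutdown runs, metadata is left dirty
    unset spdk_tgt_pid
    "$spdk_tgt_bin" "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"  # returns once the new target answers on /var/tmp/spdk.sock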
00:22:57.245 04:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.246 04:13:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:57.246 [2024-10-13 04:13:50.392320] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:22:57.246 [2024-10-13 04:13:50.392570] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78146 ] 00:22:57.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 77959 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:22:57.504 [2024-10-13 04:13:50.541936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.504 [2024-10-13 04:13:50.621901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.072 [2024-10-13 04:13:51.188662] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:58.072 [2024-10-13 04:13:51.188887] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:58.331 [2024-10-13 04:13:51.331745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.331 [2024-10-13 04:13:51.331787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:58.331 [2024-10-13 04:13:51.331800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:58.331 [2024-10-13 04:13:51.331809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.331 [2024-10-13 04:13:51.331857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.331 [2024-10-13 04:13:51.331869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:58.331 [2024-10-13 04:13:51.331877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:22:58.331 [2024-10-13 04:13:51.331885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.331 [2024-10-13 04:13:51.331906] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:58.331 [2024-10-13 04:13:51.332634] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:58.331 [2024-10-13 04:13:51.332656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.331 [2024-10-13 04:13:51.332663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:58.331 [2024-10-13 04:13:51.332671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.757 ms 00:22:58.331 [2024-10-13 04:13:51.332678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.331 [2024-10-13 04:13:51.332983] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:22:58.331 [2024-10-13 04:13:51.348528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.331 [2024-10-13 04:13:51.348558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:22:58.331 [2024-10-13 04:13:51.348571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.547 ms 00:22:58.331 [2024-10-13 04:13:51.348579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.331 [2024-10-13 04:13:51.357219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:22:58.331 [2024-10-13 04:13:51.357246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:22:58.331 [2024-10-13 04:13:51.357256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:22:58.331 [2024-10-13 04:13:51.357263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.331 [2024-10-13 04:13:51.357563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.331 [2024-10-13 04:13:51.357574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:58.331 [2024-10-13 04:13:51.357582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.226 ms 00:22:58.331 [2024-10-13 04:13:51.357589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.331 [2024-10-13 04:13:51.357650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.331 [2024-10-13 04:13:51.357660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:58.331 [2024-10-13 04:13:51.357668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:22:58.331 [2024-10-13 04:13:51.357677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.331 [2024-10-13 04:13:51.357703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.331 [2024-10-13 04:13:51.357712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:58.332 [2024-10-13 04:13:51.357720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:22:58.332 [2024-10-13 04:13:51.357727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.332 [2024-10-13 04:13:51.357746] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:58.332 [2024-10-13 04:13:51.360661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.332 [2024-10-13 04:13:51.360688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:58.332 [2024-10-13 04:13:51.360698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.919 ms 00:22:58.332 [2024-10-13 04:13:51.360704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.332 [2024-10-13 04:13:51.360730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.332 [2024-10-13 04:13:51.360737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:58.332 [2024-10-13 04:13:51.360748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:58.332 [2024-10-13 04:13:51.360754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.332 [2024-10-13 04:13:51.360773] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:22:58.332 [2024-10-13 04:13:51.360790] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:22:58.332 [2024-10-13 04:13:51.360832] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:22:58.332 [2024-10-13 04:13:51.360847] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:22:58.332 [2024-10-13 04:13:51.360971] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:22:58.332 [2024-10-13 04:13:51.360986] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:58.332 [2024-10-13 04:13:51.360996] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:22:58.332 [2024-10-13 04:13:51.361005] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:58.332 [2024-10-13 04:13:51.361014] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:58.332 [2024-10-13 04:13:51.361022] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:58.332 [2024-10-13 04:13:51.361029] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:58.332 [2024-10-13 04:13:51.361036] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:22:58.332 [2024-10-13 04:13:51.361042] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:22:58.332 [2024-10-13 04:13:51.361049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.332 [2024-10-13 04:13:51.361057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:58.332 [2024-10-13 04:13:51.361064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:22:58.332 [2024-10-13 04:13:51.361073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.332 [2024-10-13 04:13:51.361158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.332 [2024-10-13 04:13:51.361170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:58.332 [2024-10-13 04:13:51.361178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:22:58.332 [2024-10-13 04:13:51.361190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.332 [2024-10-13 04:13:51.361322] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:58.332 [2024-10-13 04:13:51.361335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:58.332 [2024-10-13 04:13:51.361344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:58.332 [2024-10-13 04:13:51.361351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:58.332 [2024-10-13 04:13:51.361361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:58.332 [2024-10-13 04:13:51.361367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:58.332 [2024-10-13 04:13:51.361375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:58.332 [2024-10-13 04:13:51.361381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:58.332 [2024-10-13 04:13:51.361389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:58.332 [2024-10-13 04:13:51.361397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:58.332 [2024-10-13 04:13:51.361404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:58.332 [2024-10-13 04:13:51.361410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:22:58.332 [2024-10-13 04:13:51.361416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:58.332 [2024-10-13 04:13:51.361423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:58.332 [2024-10-13 04:13:51.361429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:22:58.332 [2024-10-13 04:13:51.361436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:58.332 [2024-10-13 04:13:51.361442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:58.332 [2024-10-13 04:13:51.361448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:22:58.332 [2024-10-13 04:13:51.361455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:58.332 [2024-10-13 04:13:51.361461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:58.332 [2024-10-13 04:13:51.361467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:58.332 [2024-10-13 04:13:51.361474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:58.332 [2024-10-13 04:13:51.361480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:58.332 [2024-10-13 04:13:51.361487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:58.332 [2024-10-13 04:13:51.361499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:58.332 [2024-10-13 04:13:51.361506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:58.332 [2024-10-13 04:13:51.361512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:58.332 [2024-10-13 04:13:51.361518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:58.332 [2024-10-13 04:13:51.361525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:58.332 [2024-10-13 04:13:51.361531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:22:58.332 [2024-10-13 04:13:51.361537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:58.332 [2024-10-13 04:13:51.361543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:58.332 [2024-10-13 04:13:51.361549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:22:58.332 [2024-10-13 04:13:51.361556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:58.332 [2024-10-13 04:13:51.361562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:58.332 [2024-10-13 04:13:51.361568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:22:58.332 [2024-10-13 04:13:51.361574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:58.332 [2024-10-13 04:13:51.361580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:22:58.332 [2024-10-13 04:13:51.361587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:22:58.332 [2024-10-13 04:13:51.361593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:58.332 [2024-10-13 04:13:51.361599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:22:58.332 [2024-10-13 04:13:51.361606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:22:58.332 [2024-10-13 04:13:51.361628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:58.332 [2024-10-13 04:13:51.361635] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:22:58.332 [2024-10-13 04:13:51.361643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:58.332 [2024-10-13 04:13:51.361650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:58.332 [2024-10-13 04:13:51.361657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:22:58.332 [2024-10-13 04:13:51.361664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:58.332 [2024-10-13 04:13:51.361671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:58.332 [2024-10-13 04:13:51.361678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:58.332 [2024-10-13 04:13:51.361684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:58.332 [2024-10-13 04:13:51.361690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:58.332 [2024-10-13 04:13:51.361697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:58.332 [2024-10-13 04:13:51.361705] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:58.332 [2024-10-13 04:13:51.361714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:58.332 [2024-10-13 04:13:51.361722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:58.332 [2024-10-13 04:13:51.361729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:22:58.332 [2024-10-13 04:13:51.361736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:22:58.332 [2024-10-13 04:13:51.361742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:22:58.332 [2024-10-13 04:13:51.361749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:22:58.332 [2024-10-13 04:13:51.361756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:22:58.332 [2024-10-13 04:13:51.361763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:22:58.332 [2024-10-13 04:13:51.361770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:22:58.332 [2024-10-13 04:13:51.361777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:22:58.332 [2024-10-13 04:13:51.361784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:22:58.332 [2024-10-13 04:13:51.361791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:22:58.332 [2024-10-13 04:13:51.361798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:22:58.332 [2024-10-13 04:13:51.361805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:22:58.332 [2024-10-13 04:13:51.361812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:22:58.332 [2024-10-13 04:13:51.361819] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:22:58.332 [2024-10-13 04:13:51.361827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:58.332 [2024-10-13 04:13:51.361834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:58.333 [2024-10-13 04:13:51.361842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:58.333 [2024-10-13 04:13:51.361849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:58.333 [2024-10-13 04:13:51.361856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:58.333 [2024-10-13 04:13:51.361863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.333 [2024-10-13 04:13:51.361871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:58.333 [2024-10-13 04:13:51.361878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.623 ms 00:22:58.333 [2024-10-13 04:13:51.361886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.333 [2024-10-13 04:13:51.385988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.333 [2024-10-13 04:13:51.386111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:58.333 [2024-10-13 04:13:51.386170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.054 ms 00:22:58.333 [2024-10-13 04:13:51.386193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.333 [2024-10-13 04:13:51.386247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.333 [2024-10-13 04:13:51.386270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:58.333 [2024-10-13 04:13:51.386290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:22:58.333 [2024-10-13 04:13:51.386308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.333 [2024-10-13 04:13:51.416433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.333 [2024-10-13 04:13:51.416540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:58.333 [2024-10-13 04:13:51.416589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.059 ms 00:22:58.333 [2024-10-13 04:13:51.416611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.333 [2024-10-13 04:13:51.416665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.333 [2024-10-13 04:13:51.416687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:58.333 [2024-10-13 04:13:51.416707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:58.333 [2024-10-13 04:13:51.416726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.333 [2024-10-13 04:13:51.416826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.333 [2024-10-13 04:13:51.416858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:58.333 [2024-10-13 04:13:51.416879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:22:58.333 [2024-10-13 04:13:51.416962] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:22:58.333 [2024-10-13 04:13:51.417026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.333 [2024-10-13 04:13:51.417047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:58.333 [2024-10-13 04:13:51.417067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:22:58.333 [2024-10-13 04:13:51.417085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.333 [2024-10-13 04:13:51.430970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.333 [2024-10-13 04:13:51.430998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:58.333 [2024-10-13 04:13:51.431008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.855 ms 00:22:58.333 [2024-10-13 04:13:51.431016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.333 [2024-10-13 04:13:51.431121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.333 [2024-10-13 04:13:51.431132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:22:58.333 [2024-10-13 04:13:51.431141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:58.333 [2024-10-13 04:13:51.431148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.333 [2024-10-13 04:13:51.461003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.333 [2024-10-13 04:13:51.461039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:22:58.333 [2024-10-13 04:13:51.461052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.837 ms 00:22:58.333 [2024-10-13 04:13:51.461060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.333 [2024-10-13 04:13:51.470200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.333 [2024-10-13 04:13:51.470315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:58.333 [2024-10-13 04:13:51.470331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.510 ms 00:22:58.333 [2024-10-13 04:13:51.470339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.592 [2024-10-13 04:13:51.524012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.592 [2024-10-13 04:13:51.524053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:22:58.592 [2024-10-13 04:13:51.524065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.613 ms 00:22:58.592 [2024-10-13 04:13:51.524077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.592 [2024-10-13 04:13:51.524205] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:22:58.592 [2024-10-13 04:13:51.524297] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:22:58.592 [2024-10-13 04:13:51.524382] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:22:58.592 [2024-10-13 04:13:51.524469] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:22:58.592 [2024-10-13 04:13:51.524477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.592 [2024-10-13 04:13:51.524485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:22:58.592 [2024-10-13 
04:13:51.524493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.354 ms 00:22:58.592 [2024-10-13 04:13:51.524501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.592 [2024-10-13 04:13:51.524555] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:22:58.592 [2024-10-13 04:13:51.524567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.592 [2024-10-13 04:13:51.524574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:22:58.592 [2024-10-13 04:13:51.524582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:22:58.592 [2024-10-13 04:13:51.524589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.592 [2024-10-13 04:13:51.539158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.592 [2024-10-13 04:13:51.539189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:22:58.592 [2024-10-13 04:13:51.539200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.546 ms 00:22:58.592 [2024-10-13 04:13:51.539210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.592 [2024-10-13 04:13:51.547742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.592 [2024-10-13 04:13:51.547848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:22:58.592 [2024-10-13 04:13:51.547863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:22:58.592 [2024-10-13 04:13:51.547871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:58.592 [2024-10-13 04:13:51.547971] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:22:58.592 [2024-10-13 04:13:51.548091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:58.592 [2024-10-13 04:13:51.548112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:22:58.592 [2024-10-13 04:13:51.548120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.122 ms 00:22:58.592 [2024-10-13 04:13:51.548130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.161 [2024-10-13 04:13:52.094552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.161 [2024-10-13 04:13:52.094641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:22:59.161 [2024-10-13 04:13:52.094656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 545.622 ms 00:22:59.161 [2024-10-13 04:13:52.094664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.161 [2024-10-13 04:13:52.099055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.161 [2024-10-13 04:13:52.099092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:22:59.161 [2024-10-13 04:13:52.099102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.427 ms 00:22:59.161 [2024-10-13 04:13:52.099110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.161 [2024-10-13 04:13:52.100030] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:22:59.161 [2024-10-13 04:13:52.100066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.161 [2024-10-13 04:13:52.100080] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:22:59.161 [2024-10-13 04:13:52.100089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.928 ms 00:22:59.161 [2024-10-13 04:13:52.100097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.161 [2024-10-13 04:13:52.100138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.161 [2024-10-13 04:13:52.100147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:22:59.161 [2024-10-13 04:13:52.100155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:59.161 [2024-10-13 04:13:52.100163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.161 [2024-10-13 04:13:52.100197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 552.227 ms, result 0 00:22:59.161 [2024-10-13 04:13:52.100236] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:22:59.161 [2024-10-13 04:13:52.100319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.161 [2024-10-13 04:13:52.100330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:22:59.161 [2024-10-13 04:13:52.100338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:22:59.161 [2024-10-13 04:13:52.100346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.677375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.677421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:22:59.733 [2024-10-13 04:13:52.677434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 576.100 ms 00:22:59.733 [2024-10-13 04:13:52.677442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.681474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.681626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:22:59.733 [2024-10-13 04:13:52.681643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.101 ms 00:22:59.733 [2024-10-13 04:13:52.681651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.682131] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:22:59.733 [2024-10-13 04:13:52.682159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.682167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:22:59.733 [2024-10-13 04:13:52.682175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.477 ms 00:22:59.733 [2024-10-13 04:13:52.682182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.682209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.682217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:22:59.733 [2024-10-13 04:13:52.682225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:59.733 [2024-10-13 04:13:52.682232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 
04:13:52.682266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 582.024 ms, result 0 00:22:59.733 [2024-10-13 04:13:52.682305] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:59.733 [2024-10-13 04:13:52.682315] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:22:59.733 [2024-10-13 04:13:52.682324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.682331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:22:59.733 [2024-10-13 04:13:52.682339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1134.370 ms 00:22:59.733 [2024-10-13 04:13:52.682346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.682375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.682383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:22:59.733 [2024-10-13 04:13:52.682390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:59.733 [2024-10-13 04:13:52.682398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.693398] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:59.733 [2024-10-13 04:13:52.693600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.693625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:59.733 [2024-10-13 04:13:52.693635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.185 ms 00:22:59.733 [2024-10-13 04:13:52.693643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.694314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.694332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:22:59.733 [2024-10-13 04:13:52.694341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.604 ms 00:22:59.733 [2024-10-13 04:13:52.694348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.696583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.696700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:22:59.733 [2024-10-13 04:13:52.696713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.216 ms 00:22:59.733 [2024-10-13 04:13:52.696721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.696760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.696768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:22:59.733 [2024-10-13 04:13:52.696776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:59.733 [2024-10-13 04:13:52.696783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.696887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.696899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:59.733 
[2024-10-13 04:13:52.696907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:22:59.733 [2024-10-13 04:13:52.696914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.696933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.696941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:59.733 [2024-10-13 04:13:52.696949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:59.733 [2024-10-13 04:13:52.696955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.696982] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:22:59.733 [2024-10-13 04:13:52.696992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.696999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:22:59.733 [2024-10-13 04:13:52.697008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:22:59.733 [2024-10-13 04:13:52.697016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.697067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.733 [2024-10-13 04:13:52.697076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:59.733 [2024-10-13 04:13:52.697083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:22:59.733 [2024-10-13 04:13:52.697091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.733 [2024-10-13 04:13:52.697933] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1365.751 ms, result 0 00:22:59.733 [2024-10-13 04:13:52.710289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.733 [2024-10-13 04:13:52.726290] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:22:59.733 [2024-10-13 04:13:52.734511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:59.733 Validate MD5 checksum, iteration 1 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:59.733 04:13:52 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:59.733 04:13:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:59.994 [2024-10-13 04:13:52.954013] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:22:59.994 [2024-10-13 04:13:52.954277] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78186 ] 00:22:59.994 [2024-10-13 04:13:53.102868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.254 [2024-10-13 04:13:53.177572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.636  [2024-10-13T04:13:55.056Z] Copying: 701/1024 [MB] (701 MBps) [2024-10-13T04:14:00.333Z] Copying: 1024/1024 [MB] (average 704 MBps) 00:23:07.173 00:23:07.173 04:13:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:23:07.173 04:13:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:08.552 04:14:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:08.552 Validate MD5 checksum, iteration 2 00:23:08.552 04:14:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=74a6463a05be3cc411e2f08d37e2a7a7 00:23:08.552 04:14:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 74a6463a05be3cc411e2f08d37e2a7a7 != \7\4\a\6\4\6\3\a\0\5\b\e\3\c\c\4\1\1\e\2\f\0\8\d\3\7\e\2\a\7\a\7 ]] 00:23:08.552 04:14:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:08.552 04:14:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:08.552 04:14:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:23:08.552 04:14:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:08.552 04:14:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:08.552 04:14:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:08.552 04:14:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:08.552 04:14:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:08.552 04:14:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:08.552 [2024-10-13 04:14:01.688793] Starting SPDK v25.01-pre git sha1 
bbce7a874 / DPDK 24.03.0 initialization... 00:23:08.552 [2024-10-13 04:14:01.688905] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78275 ] 00:23:08.823 [2024-10-13 04:14:01.834835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.823 [2024-10-13 04:14:01.911110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.212  [2024-10-13T04:14:03.942Z] Copying: 680/1024 [MB] (680 MBps) [2024-10-13T04:14:04.883Z] Copying: 1024/1024 [MB] (average 675 MBps) 00:23:11.723 00:23:11.723 04:14:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:23:11.723 04:14:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e2b8f916ec557a1f0a8434093d734cf8 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e2b8f916ec557a1f0a8434093d734cf8 != \e\2\b\8\f\9\1\6\e\c\5\5\7\a\1\f\0\a\8\4\3\4\0\9\3\d\7\3\4\c\f\8 ]] 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78146 ]] 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78146 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78146 ']' 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 78146 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78146 00:23:13.637 killing process with pid 78146 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78146' 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 78146 00:23:13.637 04:14:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 78146 00:23:13.897 [2024-10-13 04:14:07.033049] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:23:13.897 [2024-10-13 04:14:07.043924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:13.897 [2024-10-13 04:14:07.043959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:23:13.897 [2024-10-13 04:14:07.043970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:13.897 [2024-10-13 04:14:07.043977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:13.897 [2024-10-13 04:14:07.043994] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:23:13.897 [2024-10-13 04:14:07.046177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:13.897 [2024-10-13 04:14:07.046202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:23:13.897 [2024-10-13 04:14:07.046210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.173 ms 00:23:13.897 [2024-10-13 04:14:07.046217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:13.897 [2024-10-13 04:14:07.046409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:13.897 [2024-10-13 04:14:07.046421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:23:13.897 [2024-10-13 04:14:07.046428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.175 ms 00:23:13.897 [2024-10-13 04:14:07.046434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:13.897 [2024-10-13 04:14:07.047451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:13.897 [2024-10-13 04:14:07.047573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:23:13.897 [2024-10-13 04:14:07.047584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.005 ms 00:23:13.897 [2024-10-13 04:14:07.047591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:13.897 [2024-10-13 04:14:07.048464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:13.897 [2024-10-13 04:14:07.048480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:23:13.897 [2024-10-13 04:14:07.048491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.835 ms 00:23:13.897 [2024-10-13 04:14:07.048497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:13.897 [2024-10-13 04:14:07.055753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:13.897 [2024-10-13 04:14:07.055782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:23:13.897 [2024-10-13 04:14:07.055789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.230 ms 00:23:13.897 [2024-10-13 04:14:07.055795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.159 [2024-10-13 04:14:07.059894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:14.159 [2024-10-13 04:14:07.059923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:23:14.159 [2024-10-13 04:14:07.059931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.071 ms 00:23:14.159 [2024-10-13 04:14:07.059938] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:23:14.159 [2024-10-13 04:14:07.060008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:14.159 [2024-10-13 04:14:07.060015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:23:14.159 [2024-10-13 04:14:07.060023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:23:14.159 [2024-10-13 04:14:07.060028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.159 [2024-10-13 04:14:07.067331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:14.159 [2024-10-13 04:14:07.067356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:23:14.159 [2024-10-13 04:14:07.067363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.290 ms 00:23:14.159 [2024-10-13 04:14:07.067368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.159 [2024-10-13 04:14:07.074504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:14.159 [2024-10-13 04:14:07.074605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:23:14.159 [2024-10-13 04:14:07.074630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.109 ms 00:23:14.159 [2024-10-13 04:14:07.074636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.159 [2024-10-13 04:14:07.081710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:14.159 [2024-10-13 04:14:07.081807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:23:14.160 [2024-10-13 04:14:07.081818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.050 ms 00:23:14.160 [2024-10-13 04:14:07.081823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.089086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:14.160 [2024-10-13 04:14:07.089180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:23:14.160 [2024-10-13 04:14:07.089191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.211 ms 00:23:14.160 [2024-10-13 04:14:07.089196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.089219] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:23:14.160 [2024-10-13 04:14:07.089229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:14.160 [2024-10-13 04:14:07.089236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:23:14.160 [2024-10-13 04:14:07.089242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:23:14.160 [2024-10-13 04:14:07.089248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 
[2024-10-13 04:14:07.089277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:14.160 [2024-10-13 04:14:07.089333] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:23:14.160 [2024-10-13 04:14:07.089342] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 32b67c36-f0ef-4eb8-a43b-d7ee19b6a02f 00:23:14.160 [2024-10-13 04:14:07.089348] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:23:14.160 [2024-10-13 04:14:07.089353] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:23:14.160 [2024-10-13 04:14:07.089359] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:23:14.160 [2024-10-13 04:14:07.089364] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:23:14.160 [2024-10-13 04:14:07.089370] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:23:14.160 [2024-10-13 04:14:07.089375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:23:14.160 [2024-10-13 04:14:07.089381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:23:14.160 [2024-10-13 04:14:07.089386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:23:14.160 [2024-10-13 04:14:07.089391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:23:14.160 [2024-10-13 04:14:07.089397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:14.160 [2024-10-13 04:14:07.089404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:23:14.160 [2024-10-13 04:14:07.089410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:23:14.160 [2024-10-13 04:14:07.089416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.099093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:14.160 [2024-10-13 04:14:07.099114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:23:14.160 [2024-10-13 04:14:07.099122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.662 ms 00:23:14.160 [2024-10-13 04:14:07.099129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
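
The entries above come from the validate-and-shutdown phase of the ftl_upgrade_shutdown test: upgrade_shutdown.sh reads the exported ftln1 bdev back over NVMe/TCP in two 1 GiB slices, checksums each slice with md5sum, and the FTL shutdown then persists L2P, NV cache, P2L and band metadata before dumping the band and device statistics seen here. What follows is a minimal sketch of that read-back-and-checksum loop, not the test script itself; it assumes the same spdk_dd binary, initiator config and scratch-file paths that appear in the log, and only indicates in a comment where the real test compares against the expected sums.

#!/usr/bin/env bash
# Minimal sketch of the per-slice MD5 validation traced above (hedged reconstruction).
# Assumptions: spdk_dd is built under $SPDK/build/bin, tcp_initiator_setup has already
# written ini.json, and the FTL bdev is exported over NVMe/TCP as ftln1, as in the log.
SPDK=/home/vagrant/spdk_repo/spdk
DD="$SPDK/build/bin/spdk_dd"
INI="$SPDK/test/ftl/config/ini.json"
OUT="$SPDK/test/ftl/file"
iterations=2   # the log shows two 1 GiB iterations
skip=0         # offset in --bs units; advances by 1024 blocks per iteration
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # Read 1024 x 1 MiB blocks from ftln1 into a scratch file at queue depth 2.
    "$DD" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$INI" \
        --ib=ftln1 --of="$OUT" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
    sum=$(md5sum "$OUT" | cut -f1 -d' ')
    echo "iteration $((i + 1)) md5: $sum"   # the real test compares this against the expected sum for the slice
    skip=$((skip + 1024))
done
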
00:23:14.160 [2024-10-13 04:14:07.099398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:14.160 [2024-10-13 04:14:07.099405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:23:14.160 [2024-10-13 04:14:07.099415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms 00:23:14.160 [2024-10-13 04:14:07.099421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.132996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:14.160 [2024-10-13 04:14:07.133023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:14.160 [2024-10-13 04:14:07.133031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:14.160 [2024-10-13 04:14:07.133038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.133060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:14.160 [2024-10-13 04:14:07.133066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:14.160 [2024-10-13 04:14:07.133076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:14.160 [2024-10-13 04:14:07.133082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.133145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:14.160 [2024-10-13 04:14:07.133153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:14.160 [2024-10-13 04:14:07.133159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:14.160 [2024-10-13 04:14:07.133165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.133178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:14.160 [2024-10-13 04:14:07.133185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:14.160 [2024-10-13 04:14:07.133191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:14.160 [2024-10-13 04:14:07.133200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.191233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:14.160 [2024-10-13 04:14:07.191265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:14.160 [2024-10-13 04:14:07.191274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:14.160 [2024-10-13 04:14:07.191280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.240087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:14.160 [2024-10-13 04:14:07.240125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:14.160 [2024-10-13 04:14:07.240133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:14.160 [2024-10-13 04:14:07.240143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.240190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:14.160 [2024-10-13 04:14:07.240197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:14.160 [2024-10-13 04:14:07.240204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:14.160 [2024-10-13 04:14:07.240209] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.240251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:14.160 [2024-10-13 04:14:07.240258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:14.160 [2024-10-13 04:14:07.240265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:14.160 [2024-10-13 04:14:07.240270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.240344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:14.160 [2024-10-13 04:14:07.240352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:14.160 [2024-10-13 04:14:07.240361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:14.160 [2024-10-13 04:14:07.240367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.240390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:14.160 [2024-10-13 04:14:07.240397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:23:14.160 [2024-10-13 04:14:07.240402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:14.160 [2024-10-13 04:14:07.240408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.240438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:14.160 [2024-10-13 04:14:07.240445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:14.160 [2024-10-13 04:14:07.240450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:14.160 [2024-10-13 04:14:07.240456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.240487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:14.160 [2024-10-13 04:14:07.240494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:14.160 [2024-10-13 04:14:07.240500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:14.160 [2024-10-13 04:14:07.240506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:14.160 [2024-10-13 04:14:07.240596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 196.652 ms, result 0 00:23:14.732 04:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:23:14.733 04:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:14.733 04:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:23:14.733 04:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:23:14.733 04:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:23:14.733 04:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:14.733 Remove shared memory files 00:23:14.733 04:14:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:23:14.733 04:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:14.733 04:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:23:14.733 04:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:23:14.733 04:14:07 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid77959 00:23:14.733 04:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:14.733 04:14:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:23:15.005 ************************************ 00:23:15.005 END TEST ftl_upgrade_shutdown 00:23:15.005 ************************************ 00:23:15.005 00:23:15.005 real 1m15.711s 00:23:15.005 user 1m45.857s 00:23:15.005 sys 0m17.123s 00:23:15.005 04:14:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:15.005 04:14:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:15.005 04:14:07 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:23:15.005 04:14:07 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:23:15.005 04:14:07 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:23:15.005 04:14:07 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:15.005 04:14:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:15.005 ************************************ 00:23:15.005 START TEST ftl_restore_fast 00:23:15.005 ************************************ 00:23:15.005 04:14:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:23:15.005 * Looking for test storage... 00:23:15.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.005 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:15.005 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lcov --version 00:23:15.005 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:15.005 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:15.005 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.005 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.005 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.005 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:15.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.006 --rc genhtml_branch_coverage=1 00:23:15.006 --rc genhtml_function_coverage=1 00:23:15.006 --rc genhtml_legend=1 00:23:15.006 --rc geninfo_all_blocks=1 00:23:15.006 --rc geninfo_unexecuted_blocks=1 00:23:15.006 00:23:15.006 ' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:15.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.006 --rc genhtml_branch_coverage=1 00:23:15.006 --rc genhtml_function_coverage=1 00:23:15.006 --rc genhtml_legend=1 00:23:15.006 --rc geninfo_all_blocks=1 00:23:15.006 --rc geninfo_unexecuted_blocks=1 00:23:15.006 00:23:15.006 ' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:15.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.006 --rc genhtml_branch_coverage=1 00:23:15.006 --rc genhtml_function_coverage=1 00:23:15.006 --rc genhtml_legend=1 00:23:15.006 --rc geninfo_all_blocks=1 00:23:15.006 --rc geninfo_unexecuted_blocks=1 00:23:15.006 00:23:15.006 ' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:15.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.006 --rc genhtml_branch_coverage=1 00:23:15.006 --rc genhtml_function_coverage=1 00:23:15.006 --rc genhtml_legend=1 00:23:15.006 --rc geninfo_all_blocks=1 00:23:15.006 --rc geninfo_unexecuted_blocks=1 00:23:15.006 00:23:15.006 ' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
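
The xtrace above is scripts/common.sh running its version comparison (lt 1.15 2, which expands to cmp_versions 1.15 '<' 2) before the test exports its LCOV_OPTS. Below is a condensed sketch of that field-by-field comparison, assuming only what the trace itself shows: both version strings are split on '.', '-' and ':', and the numeric fields are compared left to right, with missing trailing fields treated as zero. The helper name and the echo are illustrative, not the names used in scripts/common.sh.

#!/usr/bin/env bash
# Hedged sketch of the cmp_versions logic traced above.
cmp_versions_sketch() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}   # missing trailing fields count as 0
        ((a == b)) && continue
        case $op in
            '<') ((a < b)); return ;;      # decided by the first differing field
            '>') ((a > b)); return ;;
        esac
    done
    return 1   # all fields equal: neither strictly '<' nor '>'
}

# Mirrors the trace: 1.15 sorts before 2, so the '<' comparison succeeds.
cmp_versions_sketch 1.15 '<' 2 && echo "1.15 compares lower than 2"
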
00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.IPVQJYwB9d 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:15.006 04:14:08 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=78425 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 78425 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 78425 ']' 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.006 04:14:08 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:23:15.267 [2024-10-13 04:14:08.183630] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:23:15.267 [2024-10-13 04:14:08.183933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78425 ] 00:23:15.267 [2024-10-13 04:14:08.332537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.267 [2024-10-13 04:14:08.422581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:23:16.209 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:16.470 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:16.470 { 00:23:16.470 "name": "nvme0n1", 00:23:16.470 "aliases": [ 00:23:16.470 "70570380-9523-45e5-ac3f-15e832e9f45a" 00:23:16.470 ], 00:23:16.470 "product_name": "NVMe disk", 00:23:16.470 "block_size": 4096, 00:23:16.470 "num_blocks": 1310720, 00:23:16.470 "uuid": "70570380-9523-45e5-ac3f-15e832e9f45a", 00:23:16.470 "numa_id": -1, 00:23:16.470 "assigned_rate_limits": { 00:23:16.470 "rw_ios_per_sec": 0, 00:23:16.470 "rw_mbytes_per_sec": 0, 00:23:16.470 "r_mbytes_per_sec": 0, 00:23:16.470 "w_mbytes_per_sec": 0 00:23:16.470 }, 00:23:16.470 "claimed": true, 00:23:16.470 "claim_type": "read_many_write_one", 00:23:16.470 "zoned": false, 00:23:16.470 "supported_io_types": { 00:23:16.470 "read": true, 00:23:16.470 "write": true, 00:23:16.470 "unmap": true, 00:23:16.470 "flush": true, 00:23:16.470 "reset": true, 00:23:16.470 "nvme_admin": true, 00:23:16.470 "nvme_io": true, 00:23:16.470 "nvme_io_md": false, 00:23:16.470 "write_zeroes": true, 00:23:16.470 "zcopy": false, 00:23:16.470 "get_zone_info": false, 00:23:16.470 "zone_management": false, 00:23:16.470 "zone_append": false, 00:23:16.470 "compare": true, 00:23:16.470 "compare_and_write": false, 00:23:16.470 "abort": true, 00:23:16.470 "seek_hole": false, 00:23:16.471 "seek_data": false, 00:23:16.471 "copy": true, 00:23:16.471 "nvme_iov_md": false 00:23:16.471 }, 00:23:16.471 "driver_specific": { 00:23:16.471 "nvme": [ 00:23:16.471 { 00:23:16.471 "pci_address": "0000:00:11.0", 00:23:16.471 "trid": { 00:23:16.471 "trtype": "PCIe", 00:23:16.471 "traddr": "0000:00:11.0" 00:23:16.471 }, 00:23:16.471 "ctrlr_data": { 00:23:16.471 "cntlid": 0, 00:23:16.471 "vendor_id": "0x1b36", 00:23:16.471 "model_number": "QEMU NVMe Ctrl", 00:23:16.471 "serial_number": "12341", 00:23:16.471 "firmware_revision": "8.0.0", 00:23:16.471 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:16.471 "oacs": { 00:23:16.471 "security": 0, 00:23:16.471 "format": 1, 00:23:16.471 "firmware": 0, 00:23:16.471 "ns_manage": 1 00:23:16.471 }, 00:23:16.471 "multi_ctrlr": false, 00:23:16.471 "ana_reporting": false 00:23:16.471 }, 00:23:16.471 "vs": { 00:23:16.471 "nvme_version": "1.4" 00:23:16.471 }, 00:23:16.471 "ns_data": { 00:23:16.471 "id": 1, 00:23:16.471 "can_share": false 00:23:16.471 } 00:23:16.471 } 00:23:16.471 ], 00:23:16.471 "mp_policy": "active_passive" 00:23:16.471 } 00:23:16.471 } 00:23:16.471 ]' 00:23:16.471 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:16.471 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:16.471 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:16.471 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:16.471 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:16.471 04:14:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:23:16.471 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:23:16.471 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:16.471 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:23:16.471 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:23:16.471 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:16.731 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=839809c3-d701-40d8-a5c8-463058a9d96d 00:23:16.731 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:23:16.731 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 839809c3-d701-40d8-a5c8-463058a9d96d 00:23:16.991 04:14:09 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=23fcb54c-15e7-421d-8ef0-9cc765ef80de 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 23fcb54c-15e7-421d-8ef0-9cc765ef80de 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=cab27d7c-71d6-45df-b3d8-8021310e66be 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cab27d7c-71d6-45df-b3d8-8021310e66be 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=cab27d7c-71d6-45df-b3d8-8021310e66be 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size cab27d7c-71d6-45df-b3d8-8021310e66be 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=cab27d7c-71d6-45df-b3d8-8021310e66be 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:17.253 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cab27d7c-71d6-45df-b3d8-8021310e66be 00:23:17.513 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:17.513 { 00:23:17.513 "name": "cab27d7c-71d6-45df-b3d8-8021310e66be", 00:23:17.513 "aliases": [ 00:23:17.513 "lvs/nvme0n1p0" 00:23:17.513 ], 00:23:17.513 "product_name": "Logical Volume", 00:23:17.513 "block_size": 4096, 00:23:17.513 "num_blocks": 26476544, 00:23:17.513 "uuid": "cab27d7c-71d6-45df-b3d8-8021310e66be", 00:23:17.513 "assigned_rate_limits": { 00:23:17.513 "rw_ios_per_sec": 0, 00:23:17.513 "rw_mbytes_per_sec": 0, 00:23:17.513 "r_mbytes_per_sec": 0, 00:23:17.513 "w_mbytes_per_sec": 0 00:23:17.513 }, 00:23:17.513 "claimed": false, 00:23:17.513 "zoned": false, 00:23:17.513 "supported_io_types": { 00:23:17.513 "read": true, 00:23:17.513 "write": true, 00:23:17.513 "unmap": true, 00:23:17.513 "flush": false, 00:23:17.513 "reset": true, 00:23:17.513 "nvme_admin": false, 00:23:17.513 "nvme_io": false, 00:23:17.513 "nvme_io_md": false, 00:23:17.513 "write_zeroes": true, 00:23:17.513 "zcopy": false, 00:23:17.513 "get_zone_info": false, 00:23:17.513 "zone_management": false, 00:23:17.513 "zone_append": 
false, 00:23:17.513 "compare": false, 00:23:17.513 "compare_and_write": false, 00:23:17.513 "abort": false, 00:23:17.513 "seek_hole": true, 00:23:17.513 "seek_data": true, 00:23:17.513 "copy": false, 00:23:17.513 "nvme_iov_md": false 00:23:17.513 }, 00:23:17.513 "driver_specific": { 00:23:17.513 "lvol": { 00:23:17.513 "lvol_store_uuid": "23fcb54c-15e7-421d-8ef0-9cc765ef80de", 00:23:17.513 "base_bdev": "nvme0n1", 00:23:17.513 "thin_provision": true, 00:23:17.513 "num_allocated_clusters": 0, 00:23:17.513 "snapshot": false, 00:23:17.513 "clone": false, 00:23:17.513 "esnap_clone": false 00:23:17.513 } 00:23:17.513 } 00:23:17.513 } 00:23:17.513 ]' 00:23:17.513 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:17.513 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:17.513 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:17.513 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:17.513 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:17.514 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:17.514 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:23:17.514 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:23:17.514 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:17.773 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:17.773 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:17.774 04:14:10 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size cab27d7c-71d6-45df-b3d8-8021310e66be 00:23:17.774 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=cab27d7c-71d6-45df-b3d8-8021310e66be 00:23:17.774 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:17.774 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:17.774 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:17.774 04:14:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cab27d7c-71d6-45df-b3d8-8021310e66be 00:23:18.035 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:18.035 { 00:23:18.035 "name": "cab27d7c-71d6-45df-b3d8-8021310e66be", 00:23:18.035 "aliases": [ 00:23:18.035 "lvs/nvme0n1p0" 00:23:18.035 ], 00:23:18.035 "product_name": "Logical Volume", 00:23:18.035 "block_size": 4096, 00:23:18.035 "num_blocks": 26476544, 00:23:18.035 "uuid": "cab27d7c-71d6-45df-b3d8-8021310e66be", 00:23:18.035 "assigned_rate_limits": { 00:23:18.035 "rw_ios_per_sec": 0, 00:23:18.035 "rw_mbytes_per_sec": 0, 00:23:18.035 "r_mbytes_per_sec": 0, 00:23:18.035 "w_mbytes_per_sec": 0 00:23:18.035 }, 00:23:18.035 "claimed": false, 00:23:18.035 "zoned": false, 00:23:18.035 "supported_io_types": { 00:23:18.035 "read": true, 00:23:18.035 "write": true, 00:23:18.035 "unmap": true, 00:23:18.035 "flush": false, 00:23:18.035 "reset": true, 00:23:18.035 "nvme_admin": false, 00:23:18.035 "nvme_io": false, 00:23:18.035 "nvme_io_md": false, 00:23:18.035 "write_zeroes": true, 00:23:18.035 "zcopy": false, 00:23:18.035 "get_zone_info": false, 00:23:18.035 "zone_management": false, 
00:23:18.035 "zone_append": false, 00:23:18.035 "compare": false, 00:23:18.035 "compare_and_write": false, 00:23:18.035 "abort": false, 00:23:18.035 "seek_hole": true, 00:23:18.035 "seek_data": true, 00:23:18.035 "copy": false, 00:23:18.035 "nvme_iov_md": false 00:23:18.035 }, 00:23:18.035 "driver_specific": { 00:23:18.035 "lvol": { 00:23:18.035 "lvol_store_uuid": "23fcb54c-15e7-421d-8ef0-9cc765ef80de", 00:23:18.035 "base_bdev": "nvme0n1", 00:23:18.035 "thin_provision": true, 00:23:18.035 "num_allocated_clusters": 0, 00:23:18.035 "snapshot": false, 00:23:18.035 "clone": false, 00:23:18.035 "esnap_clone": false 00:23:18.035 } 00:23:18.035 } 00:23:18.035 } 00:23:18.035 ]' 00:23:18.035 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:18.035 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:18.035 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:18.035 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:18.035 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:18.035 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:18.035 04:14:11 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:23:18.035 04:14:11 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:18.321 04:14:11 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:18.322 04:14:11 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size cab27d7c-71d6-45df-b3d8-8021310e66be 00:23:18.322 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=cab27d7c-71d6-45df-b3d8-8021310e66be 00:23:18.322 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:18.322 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:18.322 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:18.322 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cab27d7c-71d6-45df-b3d8-8021310e66be 00:23:18.598 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:18.598 { 00:23:18.598 "name": "cab27d7c-71d6-45df-b3d8-8021310e66be", 00:23:18.598 "aliases": [ 00:23:18.598 "lvs/nvme0n1p0" 00:23:18.598 ], 00:23:18.598 "product_name": "Logical Volume", 00:23:18.598 "block_size": 4096, 00:23:18.598 "num_blocks": 26476544, 00:23:18.598 "uuid": "cab27d7c-71d6-45df-b3d8-8021310e66be", 00:23:18.598 "assigned_rate_limits": { 00:23:18.598 "rw_ios_per_sec": 0, 00:23:18.598 "rw_mbytes_per_sec": 0, 00:23:18.598 "r_mbytes_per_sec": 0, 00:23:18.598 "w_mbytes_per_sec": 0 00:23:18.598 }, 00:23:18.598 "claimed": false, 00:23:18.598 "zoned": false, 00:23:18.598 "supported_io_types": { 00:23:18.599 "read": true, 00:23:18.599 "write": true, 00:23:18.599 "unmap": true, 00:23:18.599 "flush": false, 00:23:18.599 "reset": true, 00:23:18.599 "nvme_admin": false, 00:23:18.599 "nvme_io": false, 00:23:18.599 "nvme_io_md": false, 00:23:18.599 "write_zeroes": true, 00:23:18.599 "zcopy": false, 00:23:18.599 "get_zone_info": false, 00:23:18.599 "zone_management": false, 00:23:18.599 "zone_append": false, 00:23:18.599 "compare": false, 00:23:18.599 "compare_and_write": false, 00:23:18.599 "abort": false, 00:23:18.599 "seek_hole": 
true, 00:23:18.599 "seek_data": true, 00:23:18.599 "copy": false, 00:23:18.599 "nvme_iov_md": false 00:23:18.599 }, 00:23:18.599 "driver_specific": { 00:23:18.599 "lvol": { 00:23:18.599 "lvol_store_uuid": "23fcb54c-15e7-421d-8ef0-9cc765ef80de", 00:23:18.599 "base_bdev": "nvme0n1", 00:23:18.599 "thin_provision": true, 00:23:18.599 "num_allocated_clusters": 0, 00:23:18.599 "snapshot": false, 00:23:18.599 "clone": false, 00:23:18.599 "esnap_clone": false 00:23:18.599 } 00:23:18.599 } 00:23:18.599 } 00:23:18.599 ]' 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d cab27d7c-71d6-45df-b3d8-8021310e66be --l2p_dram_limit 10' 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:23:18.599 04:14:11 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cab27d7c-71d6-45df-b3d8-8021310e66be --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:23:18.599 [2024-10-13 04:14:11.718093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.599 [2024-10-13 04:14:11.718132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:18.599 [2024-10-13 04:14:11.718144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:18.599 [2024-10-13 04:14:11.718151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.599 [2024-10-13 04:14:11.718194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.599 [2024-10-13 04:14:11.718203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:18.599 [2024-10-13 04:14:11.718212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:18.599 [2024-10-13 04:14:11.718220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.599 [2024-10-13 04:14:11.718238] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:18.599 [2024-10-13 04:14:11.718808] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:18.599 [2024-10-13 04:14:11.718826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.599 [2024-10-13 04:14:11.718832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:18.599 [2024-10-13 04:14:11.718839] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:23:18.599 [2024-10-13 04:14:11.718845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.599 [2024-10-13 04:14:11.718870] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fec26d04-fef4-44ba-8cc2-76710abc8644 00:23:18.599 [2024-10-13 04:14:11.719843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.599 [2024-10-13 04:14:11.719871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:18.599 [2024-10-13 04:14:11.719879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:18.599 [2024-10-13 04:14:11.719888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.599 [2024-10-13 04:14:11.724675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.599 [2024-10-13 04:14:11.724704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:18.599 [2024-10-13 04:14:11.724712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.751 ms 00:23:18.599 [2024-10-13 04:14:11.724720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.599 [2024-10-13 04:14:11.724824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.599 [2024-10-13 04:14:11.724834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:18.599 [2024-10-13 04:14:11.724841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:18.599 [2024-10-13 04:14:11.724851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.599 [2024-10-13 04:14:11.724889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.599 [2024-10-13 04:14:11.724898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:18.599 [2024-10-13 04:14:11.724904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:18.599 [2024-10-13 04:14:11.724911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.599 [2024-10-13 04:14:11.724927] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:18.599 [2024-10-13 04:14:11.727808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.599 [2024-10-13 04:14:11.727833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:18.599 [2024-10-13 04:14:11.727842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.883 ms 00:23:18.599 [2024-10-13 04:14:11.727852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.599 [2024-10-13 04:14:11.727878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.599 [2024-10-13 04:14:11.727884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:18.599 [2024-10-13 04:14:11.727892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:18.599 [2024-10-13 04:14:11.727898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.599 [2024-10-13 04:14:11.727911] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:18.599 [2024-10-13 04:14:11.728015] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:18.599 [2024-10-13 04:14:11.728027] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:18.599 [2024-10-13 04:14:11.728036] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:18.599 [2024-10-13 04:14:11.728045] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:18.599 [2024-10-13 04:14:11.728052] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:18.599 [2024-10-13 04:14:11.728060] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:18.599 [2024-10-13 04:14:11.728065] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:18.599 [2024-10-13 04:14:11.728072] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:18.599 [2024-10-13 04:14:11.728077] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:18.599 [2024-10-13 04:14:11.728085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.599 [2024-10-13 04:14:11.728091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:18.599 [2024-10-13 04:14:11.728099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:23:18.599 [2024-10-13 04:14:11.728117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.599 [2024-10-13 04:14:11.728182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.599 [2024-10-13 04:14:11.728189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:18.599 [2024-10-13 04:14:11.728195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:18.599 [2024-10-13 04:14:11.728201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.599 [2024-10-13 04:14:11.728275] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:18.599 [2024-10-13 04:14:11.728283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:18.599 [2024-10-13 04:14:11.728292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.599 [2024-10-13 04:14:11.728298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.599 [2024-10-13 04:14:11.728305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:18.599 [2024-10-13 04:14:11.728309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:18.599 [2024-10-13 04:14:11.728316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:18.599 [2024-10-13 04:14:11.728321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:18.599 [2024-10-13 04:14:11.728327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:18.599 [2024-10-13 04:14:11.728332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.599 [2024-10-13 04:14:11.728338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:18.599 [2024-10-13 04:14:11.728343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:18.599 [2024-10-13 04:14:11.728349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.599 [2024-10-13 04:14:11.728354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:18.599 [2024-10-13 04:14:11.728360] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:23:18.599 [2024-10-13 04:14:11.728365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.599 [2024-10-13 04:14:11.728373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:18.599 [2024-10-13 04:14:11.728378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:18.599 [2024-10-13 04:14:11.728385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.599 [2024-10-13 04:14:11.728391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:18.599 [2024-10-13 04:14:11.728399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:18.599 [2024-10-13 04:14:11.728404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.599 [2024-10-13 04:14:11.728411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:18.599 [2024-10-13 04:14:11.728416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:18.599 [2024-10-13 04:14:11.728422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.599 [2024-10-13 04:14:11.728426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:18.600 [2024-10-13 04:14:11.728432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:18.600 [2024-10-13 04:14:11.728437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.600 [2024-10-13 04:14:11.728444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:18.600 [2024-10-13 04:14:11.728449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:18.600 [2024-10-13 04:14:11.728455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.600 [2024-10-13 04:14:11.728460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:18.600 [2024-10-13 04:14:11.728468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:18.600 [2024-10-13 04:14:11.728473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.600 [2024-10-13 04:14:11.728479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:18.600 [2024-10-13 04:14:11.728484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:18.600 [2024-10-13 04:14:11.728490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.600 [2024-10-13 04:14:11.728496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:18.600 [2024-10-13 04:14:11.728502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:18.600 [2024-10-13 04:14:11.728506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.600 [2024-10-13 04:14:11.728512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:18.600 [2024-10-13 04:14:11.728517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:18.600 [2024-10-13 04:14:11.728523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.600 [2024-10-13 04:14:11.728527] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:18.600 [2024-10-13 04:14:11.728534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:18.600 [2024-10-13 04:14:11.728539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.600 [2024-10-13 
04:14:11.728546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.600 [2024-10-13 04:14:11.728552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:18.600 [2024-10-13 04:14:11.728560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:18.600 [2024-10-13 04:14:11.728565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:18.600 [2024-10-13 04:14:11.728571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:18.600 [2024-10-13 04:14:11.728576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:18.600 [2024-10-13 04:14:11.728583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:18.600 [2024-10-13 04:14:11.728592] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:18.600 [2024-10-13 04:14:11.728600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.600 [2024-10-13 04:14:11.728607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:18.600 [2024-10-13 04:14:11.728623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:18.600 [2024-10-13 04:14:11.728629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:18.600 [2024-10-13 04:14:11.728636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:18.600 [2024-10-13 04:14:11.728641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:18.600 [2024-10-13 04:14:11.728648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:18.600 [2024-10-13 04:14:11.728653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:18.600 [2024-10-13 04:14:11.728660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:18.600 [2024-10-13 04:14:11.728665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:18.600 [2024-10-13 04:14:11.728673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:18.600 [2024-10-13 04:14:11.728680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:18.600 [2024-10-13 04:14:11.728686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:18.600 [2024-10-13 04:14:11.728692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:18.600 [2024-10-13 04:14:11.728698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:18.600 [2024-10-13 
04:14:11.728704] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:18.600 [2024-10-13 04:14:11.728711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.600 [2024-10-13 04:14:11.728720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:18.600 [2024-10-13 04:14:11.728727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:18.600 [2024-10-13 04:14:11.728732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:18.600 [2024-10-13 04:14:11.728739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:18.600 [2024-10-13 04:14:11.728744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.600 [2024-10-13 04:14:11.728751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:18.600 [2024-10-13 04:14:11.728757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:23:18.600 [2024-10-13 04:14:11.728764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.600 [2024-10-13 04:14:11.728804] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:18.600 [2024-10-13 04:14:11.728815] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:21.147 [2024-10-13 04:14:14.300580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.147 [2024-10-13 04:14:14.300682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:21.147 [2024-10-13 04:14:14.300700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2571.764 ms 00:23:21.147 [2024-10-13 04:14:14.300710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.405 [2024-10-13 04:14:14.327314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.405 [2024-10-13 04:14:14.327374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.405 [2024-10-13 04:14:14.327386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.385 ms 00:23:21.405 [2024-10-13 04:14:14.327395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.405 [2024-10-13 04:14:14.327512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.405 [2024-10-13 04:14:14.327523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:21.405 [2024-10-13 04:14:14.327531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:21.405 [2024-10-13 04:14:14.327541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.405 [2024-10-13 04:14:14.354826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.354870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.406 [2024-10-13 04:14:14.354880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.257 ms 00:23:21.406 [2024-10-13 04:14:14.354889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:21.406 [2024-10-13 04:14:14.354913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.354923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.406 [2024-10-13 04:14:14.354930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:21.406 [2024-10-13 04:14:14.354940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.406 [2024-10-13 04:14:14.355396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.355414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.406 [2024-10-13 04:14:14.355423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:23:21.406 [2024-10-13 04:14:14.355431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.406 [2024-10-13 04:14:14.355518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.355526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.406 [2024-10-13 04:14:14.355533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:21.406 [2024-10-13 04:14:14.355543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.406 [2024-10-13 04:14:14.368244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.368279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.406 [2024-10-13 04:14:14.368288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.684 ms 00:23:21.406 [2024-10-13 04:14:14.368297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.406 [2024-10-13 04:14:14.377561] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:21.406 [2024-10-13 04:14:14.380150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.380179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:21.406 [2024-10-13 04:14:14.380189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.791 ms 00:23:21.406 [2024-10-13 04:14:14.380197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.406 [2024-10-13 04:14:14.443282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.443323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:21.406 [2024-10-13 04:14:14.443338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.060 ms 00:23:21.406 [2024-10-13 04:14:14.443346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.406 [2024-10-13 04:14:14.443495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.443504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:21.406 [2024-10-13 04:14:14.443515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:23:21.406 [2024-10-13 04:14:14.443524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.406 [2024-10-13 04:14:14.460971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.461000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:23:21.406 [2024-10-13 04:14:14.461011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.410 ms 00:23:21.406 [2024-10-13 04:14:14.461017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.406 [2024-10-13 04:14:14.478108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.478134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:21.406 [2024-10-13 04:14:14.478144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.056 ms 00:23:21.406 [2024-10-13 04:14:14.478150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.406 [2024-10-13 04:14:14.478590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.478604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:21.406 [2024-10-13 04:14:14.478620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:23:21.406 [2024-10-13 04:14:14.478626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.406 [2024-10-13 04:14:14.533454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.533486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:21.406 [2024-10-13 04:14:14.533498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.799 ms 00:23:21.406 [2024-10-13 04:14:14.533505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.406 [2024-10-13 04:14:14.552314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.406 [2024-10-13 04:14:14.552345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:21.406 [2024-10-13 04:14:14.552356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.748 ms 00:23:21.406 [2024-10-13 04:14:14.552362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.665 [2024-10-13 04:14:14.569826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.665 [2024-10-13 04:14:14.569855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:21.665 [2024-10-13 04:14:14.569864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.432 ms 00:23:21.665 [2024-10-13 04:14:14.569870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.665 [2024-10-13 04:14:14.587538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.665 [2024-10-13 04:14:14.587579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:21.665 [2024-10-13 04:14:14.587590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.637 ms 00:23:21.665 [2024-10-13 04:14:14.587596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.665 [2024-10-13 04:14:14.587636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.665 [2024-10-13 04:14:14.587644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:21.665 [2024-10-13 04:14:14.587653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:21.665 [2024-10-13 04:14:14.587659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.665 [2024-10-13 04:14:14.587721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.665 [2024-10-13 04:14:14.587729] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:21.665 [2024-10-13 04:14:14.587737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:21.665 [2024-10-13 04:14:14.587742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.665 [2024-10-13 04:14:14.588464] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2870.046 ms, result 0 00:23:21.665 { 00:23:21.665 "name": "ftl0", 00:23:21.665 "uuid": "fec26d04-fef4-44ba-8cc2-76710abc8644" 00:23:21.665 } 00:23:21.665 04:14:14 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:23:21.665 04:14:14 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:21.665 04:14:14 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:23:21.665 04:14:14 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:21.923 [2024-10-13 04:14:15.000158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.923 [2024-10-13 04:14:15.000195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:21.923 [2024-10-13 04:14:15.000205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:21.923 [2024-10-13 04:14:15.000219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.923 [2024-10-13 04:14:15.000237] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:21.923 [2024-10-13 04:14:15.002345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.923 [2024-10-13 04:14:15.002379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:21.923 [2024-10-13 04:14:15.002391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:23:21.923 [2024-10-13 04:14:15.002398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.923 [2024-10-13 04:14:15.002605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.923 [2024-10-13 04:14:15.002623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:21.923 [2024-10-13 04:14:15.002632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:23:21.923 [2024-10-13 04:14:15.002638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.923 [2024-10-13 04:14:15.005089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.923 [2024-10-13 04:14:15.005110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:21.923 [2024-10-13 04:14:15.005118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.436 ms 00:23:21.923 [2024-10-13 04:14:15.005124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.923 [2024-10-13 04:14:15.009747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.923 [2024-10-13 04:14:15.009775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:21.923 [2024-10-13 04:14:15.009784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.608 ms 00:23:21.923 [2024-10-13 04:14:15.009791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.923 [2024-10-13 04:14:15.028373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.923 
[2024-10-13 04:14:15.028404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:21.923 [2024-10-13 04:14:15.028414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.536 ms 00:23:21.923 [2024-10-13 04:14:15.028420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.923 [2024-10-13 04:14:15.040416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.923 [2024-10-13 04:14:15.040445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:21.923 [2024-10-13 04:14:15.040458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.963 ms 00:23:21.923 [2024-10-13 04:14:15.040464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.923 [2024-10-13 04:14:15.040578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.923 [2024-10-13 04:14:15.040586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:21.923 [2024-10-13 04:14:15.040594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:21.923 [2024-10-13 04:14:15.040601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.923 [2024-10-13 04:14:15.058626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.923 [2024-10-13 04:14:15.058655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:21.923 [2024-10-13 04:14:15.058665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.011 ms 00:23:21.923 [2024-10-13 04:14:15.058670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.923 [2024-10-13 04:14:15.075912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.923 [2024-10-13 04:14:15.075940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:21.923 [2024-10-13 04:14:15.075950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.211 ms 00:23:21.923 [2024-10-13 04:14:15.075955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.183 [2024-10-13 04:14:15.093377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.183 [2024-10-13 04:14:15.093406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:22.183 [2024-10-13 04:14:15.093415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.390 ms 00:23:22.183 [2024-10-13 04:14:15.093420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.183 [2024-10-13 04:14:15.110343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.183 [2024-10-13 04:14:15.110373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:22.183 [2024-10-13 04:14:15.110382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.866 ms 00:23:22.183 [2024-10-13 04:14:15.110387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.183 [2024-10-13 04:14:15.110416] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:22.183 [2024-10-13 04:14:15.110427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110603] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:22.183 [2024-10-13 04:14:15.110762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 
04:14:15.110774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:23:22.184 [2024-10-13 04:14:15.110949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.110999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:22.184 [2024-10-13 04:14:15.111103] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:22.184 [2024-10-13 04:14:15.111110] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fec26d04-fef4-44ba-8cc2-76710abc8644 00:23:22.184 
[2024-10-13 04:14:15.111116] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:22.184 [2024-10-13 04:14:15.111124] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:22.184 [2024-10-13 04:14:15.111131] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:22.184 [2024-10-13 04:14:15.111138] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:22.184 [2024-10-13 04:14:15.111144] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:22.184 [2024-10-13 04:14:15.111153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:22.184 [2024-10-13 04:14:15.111161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:22.184 [2024-10-13 04:14:15.111167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:22.184 [2024-10-13 04:14:15.111171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:22.184 [2024-10-13 04:14:15.111178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.184 [2024-10-13 04:14:15.111183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:22.184 [2024-10-13 04:14:15.111191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:23:22.184 [2024-10-13 04:14:15.111196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.184 [2024-10-13 04:14:15.120653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.184 [2024-10-13 04:14:15.120680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:22.184 [2024-10-13 04:14:15.120689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.432 ms 00:23:22.184 [2024-10-13 04:14:15.120694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.184 [2024-10-13 04:14:15.120963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.184 [2024-10-13 04:14:15.120971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:22.184 [2024-10-13 04:14:15.120979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:23:22.184 [2024-10-13 04:14:15.120984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.184 [2024-10-13 04:14:15.154376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.184 [2024-10-13 04:14:15.154406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:22.184 [2024-10-13 04:14:15.154416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.184 [2024-10-13 04:14:15.154422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.184 [2024-10-13 04:14:15.154467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.184 [2024-10-13 04:14:15.154473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:22.184 [2024-10-13 04:14:15.154480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.184 [2024-10-13 04:14:15.154486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.184 [2024-10-13 04:14:15.154541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.184 [2024-10-13 04:14:15.154548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:22.184 [2024-10-13 04:14:15.154555] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.184 [2024-10-13 04:14:15.154561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.184 [2024-10-13 04:14:15.154577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.184 [2024-10-13 04:14:15.154583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:22.184 [2024-10-13 04:14:15.154590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.184 [2024-10-13 04:14:15.154595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.184 [2024-10-13 04:14:15.215334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.184 [2024-10-13 04:14:15.215372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:22.184 [2024-10-13 04:14:15.215383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.184 [2024-10-13 04:14:15.215389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.184 [2024-10-13 04:14:15.264874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.185 [2024-10-13 04:14:15.264909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:22.185 [2024-10-13 04:14:15.264919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.185 [2024-10-13 04:14:15.264925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.185 [2024-10-13 04:14:15.264998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.185 [2024-10-13 04:14:15.265008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:22.185 [2024-10-13 04:14:15.265016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.185 [2024-10-13 04:14:15.265021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.185 [2024-10-13 04:14:15.265059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.185 [2024-10-13 04:14:15.265067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:22.185 [2024-10-13 04:14:15.265074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.185 [2024-10-13 04:14:15.265080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.185 [2024-10-13 04:14:15.265149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.185 [2024-10-13 04:14:15.265157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:22.185 [2024-10-13 04:14:15.265166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.185 [2024-10-13 04:14:15.265171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.185 [2024-10-13 04:14:15.265200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.185 [2024-10-13 04:14:15.265206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:22.185 [2024-10-13 04:14:15.265214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.185 [2024-10-13 04:14:15.265219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.185 [2024-10-13 04:14:15.265251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.185 [2024-10-13 04:14:15.265257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:23:22.185 [2024-10-13 04:14:15.265265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.185 [2024-10-13 04:14:15.265272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.185 [2024-10-13 04:14:15.265308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.185 [2024-10-13 04:14:15.265315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:22.185 [2024-10-13 04:14:15.265323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.185 [2024-10-13 04:14:15.265328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.185 [2024-10-13 04:14:15.265429] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 265.242 ms, result 0 00:23:22.185 true 00:23:22.185 04:14:15 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 78425 00:23:22.185 04:14:15 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 78425 ']' 00:23:22.185 04:14:15 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 78425 00:23:22.185 04:14:15 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:23:22.185 04:14:15 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.185 04:14:15 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78425 00:23:22.185 04:14:15 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:22.185 04:14:15 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:22.185 killing process with pid 78425 00:23:22.185 04:14:15 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78425' 00:23:22.185 04:14:15 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 78425 00:23:22.185 04:14:15 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 78425 00:23:30.297 04:14:22 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:23:33.620 262144+0 records in 00:23:33.620 262144+0 records out 00:23:33.620 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.85293 s, 279 MB/s 00:23:33.620 04:14:26 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:34.994 04:14:28 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:34.994 [2024-10-13 04:14:28.061898] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:23:34.994 [2024-10-13 04:14:28.061987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78628 ] 00:23:35.252 [2024-10-13 04:14:28.205326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.252 [2024-10-13 04:14:28.299167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.510 [2024-10-13 04:14:28.549664] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:35.510 [2024-10-13 04:14:28.549724] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:35.772 [2024-10-13 04:14:28.706603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.772 [2024-10-13 04:14:28.706662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:35.772 [2024-10-13 04:14:28.706675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:35.772 [2024-10-13 04:14:28.706687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.772 [2024-10-13 04:14:28.706734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.772 [2024-10-13 04:14:28.706744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:35.772 [2024-10-13 04:14:28.706752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:35.772 [2024-10-13 04:14:28.706761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.772 [2024-10-13 04:14:28.706779] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:35.772 [2024-10-13 04:14:28.707474] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:35.772 [2024-10-13 04:14:28.707489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.772 [2024-10-13 04:14:28.707499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:35.772 [2024-10-13 04:14:28.707506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:23:35.772 [2024-10-13 04:14:28.707513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.772 [2024-10-13 04:14:28.708574] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:35.772 [2024-10-13 04:14:28.721301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.772 [2024-10-13 04:14:28.721340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:35.772 [2024-10-13 04:14:28.721351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.729 ms 00:23:35.772 [2024-10-13 04:14:28.721359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.772 [2024-10-13 04:14:28.721413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.772 [2024-10-13 04:14:28.721422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:35.772 [2024-10-13 04:14:28.721432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:35.772 [2024-10-13 04:14:28.721439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.772 [2024-10-13 04:14:28.726585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:35.772 [2024-10-13 04:14:28.726625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:35.772 [2024-10-13 04:14:28.726635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.096 ms 00:23:35.772 [2024-10-13 04:14:28.726642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.772 [2024-10-13 04:14:28.726711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.772 [2024-10-13 04:14:28.726720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:35.772 [2024-10-13 04:14:28.726728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:35.772 [2024-10-13 04:14:28.726735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.772 [2024-10-13 04:14:28.726781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.772 [2024-10-13 04:14:28.726790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:35.772 [2024-10-13 04:14:28.726798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:35.772 [2024-10-13 04:14:28.726805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.772 [2024-10-13 04:14:28.726825] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:35.772 [2024-10-13 04:14:28.730159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.772 [2024-10-13 04:14:28.730195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:35.772 [2024-10-13 04:14:28.730204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:23:35.772 [2024-10-13 04:14:28.730212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.773 [2024-10-13 04:14:28.730240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.773 [2024-10-13 04:14:28.730248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:35.773 [2024-10-13 04:14:28.730255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:35.773 [2024-10-13 04:14:28.730262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.773 [2024-10-13 04:14:28.730281] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:35.773 [2024-10-13 04:14:28.730299] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:35.773 [2024-10-13 04:14:28.730334] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:35.773 [2024-10-13 04:14:28.730351] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:35.773 [2024-10-13 04:14:28.730453] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:35.773 [2024-10-13 04:14:28.730463] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:35.773 [2024-10-13 04:14:28.730474] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:35.773 [2024-10-13 04:14:28.730484] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:35.773 [2024-10-13 04:14:28.730492] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:35.773 [2024-10-13 04:14:28.730499] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:35.773 [2024-10-13 04:14:28.730506] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:35.773 [2024-10-13 04:14:28.730513] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:35.773 [2024-10-13 04:14:28.730521] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:35.773 [2024-10-13 04:14:28.730528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.773 [2024-10-13 04:14:28.730537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:35.773 [2024-10-13 04:14:28.730545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:23:35.773 [2024-10-13 04:14:28.730552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.773 [2024-10-13 04:14:28.730644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.773 [2024-10-13 04:14:28.730653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:35.773 [2024-10-13 04:14:28.730660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:23:35.773 [2024-10-13 04:14:28.730667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.773 [2024-10-13 04:14:28.730778] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:35.773 [2024-10-13 04:14:28.730788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:35.773 [2024-10-13 04:14:28.730798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:35.773 [2024-10-13 04:14:28.730805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.773 [2024-10-13 04:14:28.730812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:35.773 [2024-10-13 04:14:28.730819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:35.773 [2024-10-13 04:14:28.730826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:35.773 [2024-10-13 04:14:28.730833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:35.773 [2024-10-13 04:14:28.730840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:35.773 [2024-10-13 04:14:28.730846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:35.773 [2024-10-13 04:14:28.730852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:35.773 [2024-10-13 04:14:28.730860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:35.773 [2024-10-13 04:14:28.730866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:35.773 [2024-10-13 04:14:28.730873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:35.773 [2024-10-13 04:14:28.730880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:35.773 [2024-10-13 04:14:28.730891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.773 [2024-10-13 04:14:28.730897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:35.773 [2024-10-13 04:14:28.730904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:35.773 [2024-10-13 04:14:28.730910] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.773 [2024-10-13 04:14:28.730918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:35.773 [2024-10-13 04:14:28.730924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:35.773 [2024-10-13 04:14:28.730931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.773 [2024-10-13 04:14:28.730937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:35.773 [2024-10-13 04:14:28.730943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:35.773 [2024-10-13 04:14:28.730949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.773 [2024-10-13 04:14:28.730956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:35.773 [2024-10-13 04:14:28.730962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:35.773 [2024-10-13 04:14:28.730968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.773 [2024-10-13 04:14:28.730974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:35.773 [2024-10-13 04:14:28.730981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:35.773 [2024-10-13 04:14:28.730987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.773 [2024-10-13 04:14:28.730993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:35.773 [2024-10-13 04:14:28.731000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:35.773 [2024-10-13 04:14:28.731007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:35.773 [2024-10-13 04:14:28.731014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:35.773 [2024-10-13 04:14:28.731020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:35.773 [2024-10-13 04:14:28.731026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:35.773 [2024-10-13 04:14:28.731033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:35.773 [2024-10-13 04:14:28.731040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:35.773 [2024-10-13 04:14:28.731046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.773 [2024-10-13 04:14:28.731053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:35.773 [2024-10-13 04:14:28.731059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:35.773 [2024-10-13 04:14:28.731065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.773 [2024-10-13 04:14:28.731072] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:35.773 [2024-10-13 04:14:28.731080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:35.773 [2024-10-13 04:14:28.731087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:35.773 [2024-10-13 04:14:28.731094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.773 [2024-10-13 04:14:28.731101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:35.773 [2024-10-13 04:14:28.731108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:35.773 [2024-10-13 04:14:28.731114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:35.773 
[2024-10-13 04:14:28.731120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:35.773 [2024-10-13 04:14:28.731127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:35.773 [2024-10-13 04:14:28.731133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:35.773 [2024-10-13 04:14:28.731141] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:35.773 [2024-10-13 04:14:28.731149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:35.773 [2024-10-13 04:14:28.731158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:35.773 [2024-10-13 04:14:28.731165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:35.773 [2024-10-13 04:14:28.731172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:35.773 [2024-10-13 04:14:28.731178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:35.773 [2024-10-13 04:14:28.731185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:35.773 [2024-10-13 04:14:28.731192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:35.773 [2024-10-13 04:14:28.731199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:35.773 [2024-10-13 04:14:28.731205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:35.773 [2024-10-13 04:14:28.731213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:35.773 [2024-10-13 04:14:28.731220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:35.773 [2024-10-13 04:14:28.731227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:35.773 [2024-10-13 04:14:28.731234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:35.773 [2024-10-13 04:14:28.731240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:35.773 [2024-10-13 04:14:28.731247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:35.773 [2024-10-13 04:14:28.731254] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:35.773 [2024-10-13 04:14:28.731262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:35.773 [2024-10-13 04:14:28.731272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:35.773 [2024-10-13 04:14:28.731279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:35.773 [2024-10-13 04:14:28.731286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:35.773 [2024-10-13 04:14:28.731293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:35.773 [2024-10-13 04:14:28.731300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.773 [2024-10-13 04:14:28.731307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:35.773 [2024-10-13 04:14:28.731314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:23:35.773 [2024-10-13 04:14:28.731321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.774 [2024-10-13 04:14:28.757450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.774 [2024-10-13 04:14:28.757487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:35.774 [2024-10-13 04:14:28.757497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.089 ms 00:23:35.774 [2024-10-13 04:14:28.757505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.774 [2024-10-13 04:14:28.757585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.774 [2024-10-13 04:14:28.757596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:35.774 [2024-10-13 04:14:28.757604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:35.774 [2024-10-13 04:14:28.757611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.774 [2024-10-13 04:14:28.803857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.774 [2024-10-13 04:14:28.803899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:35.774 [2024-10-13 04:14:28.803912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.177 ms 00:23:35.774 [2024-10-13 04:14:28.803920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.774 [2024-10-13 04:14:28.803959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.774 [2024-10-13 04:14:28.803969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:35.774 [2024-10-13 04:14:28.803978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:35.774 [2024-10-13 04:14:28.803985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.774 [2024-10-13 04:14:28.804378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.774 [2024-10-13 04:14:28.804404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:35.774 [2024-10-13 04:14:28.804414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:23:35.774 [2024-10-13 04:14:28.804422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.774 [2024-10-13 04:14:28.804548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.774 [2024-10-13 04:14:28.804561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:35.774 [2024-10-13 04:14:28.804570] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:23:35.774 [2024-10-13 04:14:28.804577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.774 [2024-10-13 04:14:28.817838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.774 [2024-10-13 04:14:28.817869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:35.774 [2024-10-13 04:14:28.817879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.239 ms 00:23:35.774 [2024-10-13 04:14:28.817889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.774 [2024-10-13 04:14:28.830809] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:35.774 [2024-10-13 04:14:28.830846] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:35.774 [2024-10-13 04:14:28.830857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.774 [2024-10-13 04:14:28.830865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:35.774 [2024-10-13 04:14:28.830874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.880 ms 00:23:35.774 [2024-10-13 04:14:28.830881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.774 [2024-10-13 04:14:28.855156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.774 [2024-10-13 04:14:28.855189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:35.774 [2024-10-13 04:14:28.855200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.236 ms 00:23:35.774 [2024-10-13 04:14:28.855212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.774 [2024-10-13 04:14:28.867234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.774 [2024-10-13 04:14:28.867273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:35.774 [2024-10-13 04:14:28.867284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.982 ms 00:23:35.774 [2024-10-13 04:14:28.867291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.774 [2024-10-13 04:14:28.879359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.774 [2024-10-13 04:14:28.879391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:35.774 [2024-10-13 04:14:28.879401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.035 ms 00:23:35.774 [2024-10-13 04:14:28.879408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.774 [2024-10-13 04:14:28.880018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.774 [2024-10-13 04:14:28.880037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:35.774 [2024-10-13 04:14:28.880046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:23:35.774 [2024-10-13 04:14:28.880054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.035 [2024-10-13 04:14:28.935550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.035 [2024-10-13 04:14:28.935597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:36.035 [2024-10-13 04:14:28.935610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.479 ms 00:23:36.035 [2024-10-13 04:14:28.935634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.036 [2024-10-13 04:14:28.945934] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:36.036 [2024-10-13 04:14:28.948324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.036 [2024-10-13 04:14:28.948357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:36.036 [2024-10-13 04:14:28.948368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.641 ms 00:23:36.036 [2024-10-13 04:14:28.948376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.036 [2024-10-13 04:14:28.948464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.036 [2024-10-13 04:14:28.948475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:36.036 [2024-10-13 04:14:28.948484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:36.036 [2024-10-13 04:14:28.948492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.036 [2024-10-13 04:14:28.948557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.036 [2024-10-13 04:14:28.948570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:36.036 [2024-10-13 04:14:28.948579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:36.036 [2024-10-13 04:14:28.948586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.036 [2024-10-13 04:14:28.948605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.036 [2024-10-13 04:14:28.948627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:36.036 [2024-10-13 04:14:28.948635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:36.036 [2024-10-13 04:14:28.948642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.036 [2024-10-13 04:14:28.948672] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:36.036 [2024-10-13 04:14:28.948681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.036 [2024-10-13 04:14:28.948689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:36.036 [2024-10-13 04:14:28.948699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:36.036 [2024-10-13 04:14:28.948706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.036 [2024-10-13 04:14:28.972276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.036 [2024-10-13 04:14:28.972313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:36.036 [2024-10-13 04:14:28.972325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.551 ms 00:23:36.036 [2024-10-13 04:14:28.972332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.036 [2024-10-13 04:14:28.972403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.036 [2024-10-13 04:14:28.972414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:36.036 [2024-10-13 04:14:28.972423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:36.036 [2024-10-13 04:14:28.972430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
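A note for readers working from saved copies of this console output: each FTL management step above is traced by mngt/ftl_mngt.c:trace_step as an "Action" (or "Rollback") entry followed by "name:", "duration: ... ms" and "status:" entries, so a captured log can be summarized mechanically. The short Python sketch below is illustrative only and is not part of the SPDK test suite; the log path "build.log" and the whitespace/regex handling are assumptions, while the "name:" / "duration: ... ms" wording is taken directly from the entries shown here. It ranks the slowest steps and totals them, which should roughly agree with the finish_msg summary ("Management process finished, name '...', duration = ... ms").

#!/usr/bin/env python3
# Illustrative sketch (assumption: run against a saved copy of this console
# log, e.g. "build.log"); not part of the SPDK test suite.
import re
import sys

# A traced step appears as "... name: <step>" followed, possibly on the same
# physical line in this capture, by "... duration: <float> ms".
STEP_RE = re.compile(
    r"\[FTL\]\[\w+\] name: (?P<name>.+?)"
    r"(?=\s+\d{2}:\d{2}:\d{2}\.\d{3}\s|\s*$)"      # stop at the next console timestamp or end of line
    r".*?\[FTL\]\[\w+\] duration: (?P<ms>[0-9.]+) ms",
    re.S | re.M,
)

def main(path: str) -> None:
    with open(path, encoding="utf-8", errors="replace") as log:
        text = log.read()
    steps = [(m.group("name"), float(m.group("ms"))) for m in STEP_RE.finditer(text)]
    # Slowest steps first, then a total that should be close to the
    # finish_msg "duration = ..." values printed in the log itself.
    for name, ms in sorted(steps, key=lambda s: s[1], reverse=True)[:10]:
        print(f"{ms:9.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):9.3f} ms  total over {len(steps)} steps")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "build.log")

Run as "python3 summarize_ftl_steps.py build.log" (script name hypothetical); in this capture the dominant startup costs are the "Restore P2L checkpoints", "Initialize NV cache" and "Initialize metadata" steps, consistent with the durations logged above.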
00:23:36.036 [2024-10-13 04:14:28.973397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 266.353 ms, result 0 00:23:36.980  [2024-10-13T04:14:31.084Z] Copying: 16/1024 [MB] (16 MBps) [2024-10-13T04:14:32.027Z] Copying: 29/1024 [MB] (13 MBps) [2024-10-13T04:14:33.089Z] Copying: 59/1024 [MB] (29 MBps) [2024-10-13T04:14:34.032Z] Copying: 76/1024 [MB] (17 MBps) [2024-10-13T04:14:35.418Z] Copying: 108/1024 [MB] (31 MBps) [2024-10-13T04:14:35.991Z] Copying: 137/1024 [MB] (29 MBps) [2024-10-13T04:14:37.377Z] Copying: 161/1024 [MB] (23 MBps) [2024-10-13T04:14:38.318Z] Copying: 215/1024 [MB] (53 MBps) [2024-10-13T04:14:39.258Z] Copying: 261/1024 [MB] (45 MBps) [2024-10-13T04:14:40.201Z] Copying: 305/1024 [MB] (44 MBps) [2024-10-13T04:14:41.144Z] Copying: 329/1024 [MB] (24 MBps) [2024-10-13T04:14:42.087Z] Copying: 357/1024 [MB] (28 MBps) [2024-10-13T04:14:43.035Z] Copying: 397/1024 [MB] (39 MBps) [2024-10-13T04:14:44.422Z] Copying: 439/1024 [MB] (41 MBps) [2024-10-13T04:14:44.995Z] Copying: 460/1024 [MB] (20 MBps) [2024-10-13T04:14:46.381Z] Copying: 482/1024 [MB] (22 MBps) [2024-10-13T04:14:47.331Z] Copying: 496/1024 [MB] (13 MBps) [2024-10-13T04:14:48.272Z] Copying: 515/1024 [MB] (19 MBps) [2024-10-13T04:14:49.211Z] Copying: 526/1024 [MB] (11 MBps) [2024-10-13T04:14:50.151Z] Copying: 540/1024 [MB] (14 MBps) [2024-10-13T04:14:51.090Z] Copying: 557/1024 [MB] (16 MBps) [2024-10-13T04:14:52.031Z] Copying: 574/1024 [MB] (17 MBps) [2024-10-13T04:14:53.414Z] Copying: 591/1024 [MB] (16 MBps) [2024-10-13T04:14:54.354Z] Copying: 602/1024 [MB] (11 MBps) [2024-10-13T04:14:55.295Z] Copying: 618/1024 [MB] (16 MBps) [2024-10-13T04:14:56.234Z] Copying: 628/1024 [MB] (10 MBps) [2024-10-13T04:14:57.174Z] Copying: 644/1024 [MB] (15 MBps) [2024-10-13T04:14:58.115Z] Copying: 658/1024 [MB] (13 MBps) [2024-10-13T04:14:59.055Z] Copying: 669/1024 [MB] (10 MBps) [2024-10-13T04:14:59.996Z] Copying: 702/1024 [MB] (33 MBps) [2024-10-13T04:15:01.381Z] Copying: 714/1024 [MB] (11 MBps) [2024-10-13T04:15:02.325Z] Copying: 727/1024 [MB] (12 MBps) [2024-10-13T04:15:03.336Z] Copying: 741/1024 [MB] (14 MBps) [2024-10-13T04:15:04.279Z] Copying: 753/1024 [MB] (12 MBps) [2024-10-13T04:15:05.221Z] Copying: 770/1024 [MB] (17 MBps) [2024-10-13T04:15:06.164Z] Copying: 790/1024 [MB] (19 MBps) [2024-10-13T04:15:07.108Z] Copying: 800/1024 [MB] (10 MBps) [2024-10-13T04:15:08.052Z] Copying: 828/1024 [MB] (27 MBps) [2024-10-13T04:15:08.995Z] Copying: 848/1024 [MB] (20 MBps) [2024-10-13T04:15:10.382Z] Copying: 864/1024 [MB] (15 MBps) [2024-10-13T04:15:11.326Z] Copying: 892/1024 [MB] (27 MBps) [2024-10-13T04:15:12.270Z] Copying: 907/1024 [MB] (15 MBps) [2024-10-13T04:15:13.214Z] Copying: 922/1024 [MB] (15 MBps) [2024-10-13T04:15:14.158Z] Copying: 939/1024 [MB] (16 MBps) [2024-10-13T04:15:15.106Z] Copying: 955/1024 [MB] (15 MBps) [2024-10-13T04:15:16.049Z] Copying: 970/1024 [MB] (15 MBps) [2024-10-13T04:15:16.992Z] Copying: 984/1024 [MB] (14 MBps) [2024-10-13T04:15:18.378Z] Copying: 1004/1024 [MB] (19 MBps) [2024-10-13T04:15:18.641Z] Copying: 1018/1024 [MB] (14 MBps) [2024-10-13T04:15:18.641Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-10-13 04:15:18.476544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.481 [2024-10-13 04:15:18.476608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:25.481 [2024-10-13 04:15:18.476647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:25.481 
[2024-10-13 04:15:18.476656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.481 [2024-10-13 04:15:18.476679] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:25.481 [2024-10-13 04:15:18.479741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.481 [2024-10-13 04:15:18.479792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:25.481 [2024-10-13 04:15:18.479804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.046 ms 00:24:25.481 [2024-10-13 04:15:18.479813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.481 [2024-10-13 04:15:18.483650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.481 [2024-10-13 04:15:18.483766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:25.481 [2024-10-13 04:15:18.483799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.802 ms 00:24:25.481 [2024-10-13 04:15:18.483822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.481 [2024-10-13 04:15:18.483891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.481 [2024-10-13 04:15:18.483914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:24:25.481 [2024-10-13 04:15:18.483937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:25.481 [2024-10-13 04:15:18.483957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.481 [2024-10-13 04:15:18.484063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.481 [2024-10-13 04:15:18.484087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:24:25.481 [2024-10-13 04:15:18.484147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:25.481 [2024-10-13 04:15:18.484167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.481 [2024-10-13 04:15:18.484202] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:25.481 [2024-10-13 04:15:18.484235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.484997] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:25.481 [2024-10-13 04:15:18.485338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485531] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.485980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 
04:15:18.486085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:25.482 [2024-10-13 04:15:18.486446] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:25.482 [2024-10-13 04:15:18.486467] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fec26d04-fef4-44ba-8cc2-76710abc8644 00:24:25.482 [2024-10-13 04:15:18.486488] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:25.482 [2024-10-13 04:15:18.486508] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:24:25.482 [2024-10-13 04:15:18.486528] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:25.482 [2024-10-13 04:15:18.486548] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:25.482 [2024-10-13 04:15:18.486567] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:25.482 [2024-10-13 04:15:18.486650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:25.482 [2024-10-13 04:15:18.486671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:25.482 [2024-10-13 04:15:18.486689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:25.482 [2024-10-13 04:15:18.486707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:25.482 [2024-10-13 
04:15:18.486727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.482 [2024-10-13 04:15:18.486747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:25.482 [2024-10-13 04:15:18.486768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.526 ms 00:24:25.482 [2024-10-13 04:15:18.486788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.482 [2024-10-13 04:15:18.503159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.482 [2024-10-13 04:15:18.503210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:25.482 [2024-10-13 04:15:18.503222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.336 ms 00:24:25.482 [2024-10-13 04:15:18.503238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.482 [2024-10-13 04:15:18.503654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.482 [2024-10-13 04:15:18.503673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:25.482 [2024-10-13 04:15:18.503682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:24:25.482 [2024-10-13 04:15:18.503690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.482 [2024-10-13 04:15:18.540088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.482 [2024-10-13 04:15:18.540154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:25.482 [2024-10-13 04:15:18.540172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.482 [2024-10-13 04:15:18.540181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.482 [2024-10-13 04:15:18.540245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.482 [2024-10-13 04:15:18.540253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:25.482 [2024-10-13 04:15:18.540262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.482 [2024-10-13 04:15:18.540270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.482 [2024-10-13 04:15:18.540343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.482 [2024-10-13 04:15:18.540355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:25.482 [2024-10-13 04:15:18.540364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.482 [2024-10-13 04:15:18.540384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.482 [2024-10-13 04:15:18.540400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.482 [2024-10-13 04:15:18.540415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:25.482 [2024-10-13 04:15:18.540425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.482 [2024-10-13 04:15:18.540433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.482 [2024-10-13 04:15:18.624600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.482 [2024-10-13 04:15:18.624664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:25.482 [2024-10-13 04:15:18.624676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.482 [2024-10-13 04:15:18.624692] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.791 [2024-10-13 04:15:18.693480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.791 [2024-10-13 04:15:18.693539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.791 [2024-10-13 04:15:18.693551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.791 [2024-10-13 04:15:18.693561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.791 [2024-10-13 04:15:18.693634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.791 [2024-10-13 04:15:18.693645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.791 [2024-10-13 04:15:18.693654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.791 [2024-10-13 04:15:18.693663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.791 [2024-10-13 04:15:18.693726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.791 [2024-10-13 04:15:18.693738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.791 [2024-10-13 04:15:18.693747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.791 [2024-10-13 04:15:18.693755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.791 [2024-10-13 04:15:18.693837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.791 [2024-10-13 04:15:18.693857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.791 [2024-10-13 04:15:18.693867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.791 [2024-10-13 04:15:18.693874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.791 [2024-10-13 04:15:18.693909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.791 [2024-10-13 04:15:18.693922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:25.791 [2024-10-13 04:15:18.693931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.791 [2024-10-13 04:15:18.693939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.791 [2024-10-13 04:15:18.693982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.791 [2024-10-13 04:15:18.693991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.791 [2024-10-13 04:15:18.694000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.791 [2024-10-13 04:15:18.694008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.791 [2024-10-13 04:15:18.694061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.791 [2024-10-13 04:15:18.694071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.791 [2024-10-13 04:15:18.694082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.791 [2024-10-13 04:15:18.694091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.791 [2024-10-13 04:15:18.694219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 217.642 ms, result 0 00:24:26.734 00:24:26.734 00:24:26.734 04:15:19 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:24:26.734 [2024-10-13 04:15:19.855958] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:24:26.734 [2024-10-13 04:15:19.856134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79165 ] 00:24:26.995 [2024-10-13 04:15:20.008154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.995 [2024-10-13 04:15:20.154343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.569 [2024-10-13 04:15:20.448517] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:27.569 [2024-10-13 04:15:20.448600] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:27.569 [2024-10-13 04:15:20.609660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.569 [2024-10-13 04:15:20.609725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:27.569 [2024-10-13 04:15:20.609740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:27.569 [2024-10-13 04:15:20.609755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.569 [2024-10-13 04:15:20.609812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.569 [2024-10-13 04:15:20.609824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:27.569 [2024-10-13 04:15:20.609833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:27.569 [2024-10-13 04:15:20.609844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.569 [2024-10-13 04:15:20.609865] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:27.569 [2024-10-13 04:15:20.610598] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:27.569 [2024-10-13 04:15:20.610642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.569 [2024-10-13 04:15:20.610653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:27.569 [2024-10-13 04:15:20.610664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:24:27.569 [2024-10-13 04:15:20.610671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.569 [2024-10-13 04:15:20.611310] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:24:27.569 [2024-10-13 04:15:20.611390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.569 [2024-10-13 04:15:20.611401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:27.569 [2024-10-13 04:15:20.611413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:27.569 [2024-10-13 04:15:20.611428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.569 [2024-10-13 04:15:20.611486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.569 [2024-10-13 04:15:20.611497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:27.570 [2024-10-13 04:15:20.611506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.034 ms 00:24:27.570 [2024-10-13 04:15:20.611513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.570 [2024-10-13 04:15:20.611829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.570 [2024-10-13 04:15:20.611845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:27.570 [2024-10-13 04:15:20.611856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:24:27.570 [2024-10-13 04:15:20.611864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.570 [2024-10-13 04:15:20.611935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.570 [2024-10-13 04:15:20.611946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:27.570 [2024-10-13 04:15:20.611955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:27.570 [2024-10-13 04:15:20.611963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.570 [2024-10-13 04:15:20.611986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.570 [2024-10-13 04:15:20.611997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:27.570 [2024-10-13 04:15:20.612006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:27.570 [2024-10-13 04:15:20.612013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.570 [2024-10-13 04:15:20.612038] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:27.570 [2024-10-13 04:15:20.616410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.570 [2024-10-13 04:15:20.616456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:27.570 [2024-10-13 04:15:20.616470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.378 ms 00:24:27.570 [2024-10-13 04:15:20.616477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.570 [2024-10-13 04:15:20.616512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.570 [2024-10-13 04:15:20.616521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:27.570 [2024-10-13 04:15:20.616530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:27.570 [2024-10-13 04:15:20.616538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.570 [2024-10-13 04:15:20.616602] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:27.570 [2024-10-13 04:15:20.616643] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:27.570 [2024-10-13 04:15:20.616682] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:27.570 [2024-10-13 04:15:20.616702] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:27.570 [2024-10-13 04:15:20.616807] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:27.570 [2024-10-13 04:15:20.616819] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:27.570 [2024-10-13 04:15:20.616830] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] layout blob store 0x190 bytes 00:24:27.570 [2024-10-13 04:15:20.616843] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:27.570 [2024-10-13 04:15:20.616854] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:27.570 [2024-10-13 04:15:20.616862] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:27.570 [2024-10-13 04:15:20.616871] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:27.570 [2024-10-13 04:15:20.616881] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:27.570 [2024-10-13 04:15:20.616889] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:27.570 [2024-10-13 04:15:20.616897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.570 [2024-10-13 04:15:20.616906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:27.570 [2024-10-13 04:15:20.616914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:24:27.570 [2024-10-13 04:15:20.616922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.570 [2024-10-13 04:15:20.617010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.570 [2024-10-13 04:15:20.617021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:27.570 [2024-10-13 04:15:20.617030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:27.570 [2024-10-13 04:15:20.617037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.570 [2024-10-13 04:15:20.617143] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:27.570 [2024-10-13 04:15:20.617156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:27.570 [2024-10-13 04:15:20.617165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:27.570 [2024-10-13 04:15:20.617174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:27.570 [2024-10-13 04:15:20.617190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:27.570 [2024-10-13 04:15:20.617205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:27.570 [2024-10-13 04:15:20.617213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:27.570 [2024-10-13 04:15:20.617231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:27.570 [2024-10-13 04:15:20.617238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:27.570 [2024-10-13 04:15:20.617245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:27.570 [2024-10-13 04:15:20.617253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:27.570 [2024-10-13 04:15:20.617261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:27.570 [2024-10-13 04:15:20.617268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617275] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:27.570 [2024-10-13 04:15:20.617289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:27.570 [2024-10-13 04:15:20.617296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:27.570 [2024-10-13 04:15:20.617309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.570 [2024-10-13 04:15:20.617322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:27.570 [2024-10-13 04:15:20.617328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.570 [2024-10-13 04:15:20.617341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:27.570 [2024-10-13 04:15:20.617348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.570 [2024-10-13 04:15:20.617362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:27.570 [2024-10-13 04:15:20.617368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.570 [2024-10-13 04:15:20.617380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:27.570 [2024-10-13 04:15:20.617387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:27.570 [2024-10-13 04:15:20.617399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:27.570 [2024-10-13 04:15:20.617405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:27.570 [2024-10-13 04:15:20.617412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:27.570 [2024-10-13 04:15:20.617419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:27.570 [2024-10-13 04:15:20.617426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:27.570 [2024-10-13 04:15:20.617433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:27.570 [2024-10-13 04:15:20.617446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:27.570 [2024-10-13 04:15:20.617453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617460] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:27.570 [2024-10-13 04:15:20.617469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:27.570 [2024-10-13 04:15:20.617477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:27.570 [2024-10-13 04:15:20.617484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.570 [2024-10-13 04:15:20.617492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:27.570 
[2024-10-13 04:15:20.617500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:27.570 [2024-10-13 04:15:20.617509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:27.570 [2024-10-13 04:15:20.617515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:27.570 [2024-10-13 04:15:20.617522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:27.570 [2024-10-13 04:15:20.617529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:27.570 [2024-10-13 04:15:20.617538] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:27.570 [2024-10-13 04:15:20.617548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:27.570 [2024-10-13 04:15:20.617559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:27.570 [2024-10-13 04:15:20.617567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:27.570 [2024-10-13 04:15:20.617576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:27.570 [2024-10-13 04:15:20.617583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:27.570 [2024-10-13 04:15:20.617591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:27.570 [2024-10-13 04:15:20.617598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:27.570 [2024-10-13 04:15:20.617606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:27.570 [2024-10-13 04:15:20.617626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:27.570 [2024-10-13 04:15:20.617634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:27.571 [2024-10-13 04:15:20.617641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:27.571 [2024-10-13 04:15:20.617650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:27.571 [2024-10-13 04:15:20.617657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:27.571 [2024-10-13 04:15:20.617665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:27.571 [2024-10-13 04:15:20.617673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:27.571 [2024-10-13 04:15:20.617681] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:27.571 [2024-10-13 04:15:20.617690] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:27.571 [2024-10-13 04:15:20.617698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:27.571 [2024-10-13 04:15:20.617706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:27.571 [2024-10-13 04:15:20.617713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:27.571 [2024-10-13 04:15:20.617722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:27.571 [2024-10-13 04:15:20.617730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.617738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:27.571 [2024-10-13 04:15:20.617746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:24:27.571 [2024-10-13 04:15:20.617753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 04:15:20.645524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.645571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:27.571 [2024-10-13 04:15:20.645583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.727 ms 00:24:27.571 [2024-10-13 04:15:20.645592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 04:15:20.645693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.645702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:27.571 [2024-10-13 04:15:20.645711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:27.571 [2024-10-13 04:15:20.645719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 04:15:20.693059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.693117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:27.571 [2024-10-13 04:15:20.693130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.273 ms 00:24:27.571 [2024-10-13 04:15:20.693139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 04:15:20.693188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.693198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:27.571 [2024-10-13 04:15:20.693213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:27.571 [2024-10-13 04:15:20.693221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 04:15:20.693332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.693346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:27.571 [2024-10-13 04:15:20.693355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:27.571 [2024-10-13 04:15:20.693363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 
04:15:20.693492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.693505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:27.571 [2024-10-13 04:15:20.693515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:24:27.571 [2024-10-13 04:15:20.693526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 04:15:20.709132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.709181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:27.571 [2024-10-13 04:15:20.709196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.586 ms 00:24:27.571 [2024-10-13 04:15:20.709204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 04:15:20.709360] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:27.571 [2024-10-13 04:15:20.709375] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:27.571 [2024-10-13 04:15:20.709385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.709394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:27.571 [2024-10-13 04:15:20.709404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:27.571 [2024-10-13 04:15:20.709414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 04:15:20.721720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.721767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:27.571 [2024-10-13 04:15:20.721778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.285 ms 00:24:27.571 [2024-10-13 04:15:20.721786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 04:15:20.721911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.721920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:27.571 [2024-10-13 04:15:20.721930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:24:27.571 [2024-10-13 04:15:20.721938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 04:15:20.721993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.722010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:27.571 [2024-10-13 04:15:20.722019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:27.571 [2024-10-13 04:15:20.722028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 04:15:20.722651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.722675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:27.571 [2024-10-13 04:15:20.722685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:24:27.571 [2024-10-13 04:15:20.722693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.571 [2024-10-13 04:15:20.722711] mngt/ftl_mngt_p2l.c: 
169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:24:27.571 [2024-10-13 04:15:20.722722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.571 [2024-10-13 04:15:20.722735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:27.571 [2024-10-13 04:15:20.722743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:27.571 [2024-10-13 04:15:20.722751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.833 [2024-10-13 04:15:20.735473] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:27.833 [2024-10-13 04:15:20.735656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.833 [2024-10-13 04:15:20.735668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:27.833 [2024-10-13 04:15:20.735680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.885 ms 00:24:27.833 [2024-10-13 04:15:20.735690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.833 [2024-10-13 04:15:20.737813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.833 [2024-10-13 04:15:20.737853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:27.833 [2024-10-13 04:15:20.737868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.097 ms 00:24:27.833 [2024-10-13 04:15:20.737877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.833 [2024-10-13 04:15:20.737976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.833 [2024-10-13 04:15:20.737986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:27.834 [2024-10-13 04:15:20.737996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:27.834 [2024-10-13 04:15:20.738004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.834 [2024-10-13 04:15:20.738029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.834 [2024-10-13 04:15:20.738040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:27.834 [2024-10-13 04:15:20.738050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:27.834 [2024-10-13 04:15:20.738062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.834 [2024-10-13 04:15:20.738095] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:27.834 [2024-10-13 04:15:20.738105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.834 [2024-10-13 04:15:20.738113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:27.834 [2024-10-13 04:15:20.738121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:27.834 [2024-10-13 04:15:20.738129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.834 [2024-10-13 04:15:20.764884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.834 [2024-10-13 04:15:20.764939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:27.834 [2024-10-13 04:15:20.764959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.733 ms 00:24:27.834 [2024-10-13 04:15:20.764968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.834 [2024-10-13 04:15:20.765055] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.834 [2024-10-13 04:15:20.765067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:27.834 [2024-10-13 04:15:20.765076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:27.834 [2024-10-13 04:15:20.765087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.834 [2024-10-13 04:15:20.766335] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.223 ms, result 0 00:24:29.221  [2024-10-13T04:15:22.954Z] Copying: 10/1024 [MB] (10 MBps) [2024-10-13T04:15:24.343Z] Copying: 24/1024 [MB] (13 MBps) [2024-10-13T04:15:25.288Z] Copying: 38/1024 [MB] (14 MBps) [2024-10-13T04:15:26.232Z] Copying: 51/1024 [MB] (12 MBps) [2024-10-13T04:15:27.166Z] Copying: 61/1024 [MB] (10 MBps) [2024-10-13T04:15:28.099Z] Copying: 77/1024 [MB] (16 MBps) [2024-10-13T04:15:29.034Z] Copying: 98/1024 [MB] (20 MBps) [2024-10-13T04:15:29.967Z] Copying: 124/1024 [MB] (25 MBps) [2024-10-13T04:15:31.343Z] Copying: 144/1024 [MB] (20 MBps) [2024-10-13T04:15:32.277Z] Copying: 168/1024 [MB] (24 MBps) [2024-10-13T04:15:33.211Z] Copying: 191/1024 [MB] (23 MBps) [2024-10-13T04:15:34.146Z] Copying: 217/1024 [MB] (25 MBps) [2024-10-13T04:15:35.081Z] Copying: 239/1024 [MB] (21 MBps) [2024-10-13T04:15:36.018Z] Copying: 263/1024 [MB] (24 MBps) [2024-10-13T04:15:36.994Z] Copying: 281/1024 [MB] (17 MBps) [2024-10-13T04:15:38.370Z] Copying: 308/1024 [MB] (26 MBps) [2024-10-13T04:15:39.309Z] Copying: 323/1024 [MB] (14 MBps) [2024-10-13T04:15:40.251Z] Copying: 343/1024 [MB] (20 MBps) [2024-10-13T04:15:41.193Z] Copying: 360/1024 [MB] (16 MBps) [2024-10-13T04:15:42.136Z] Copying: 372/1024 [MB] (12 MBps) [2024-10-13T04:15:43.079Z] Copying: 394/1024 [MB] (21 MBps) [2024-10-13T04:15:44.022Z] Copying: 414/1024 [MB] (20 MBps) [2024-10-13T04:15:44.965Z] Copying: 432/1024 [MB] (18 MBps) [2024-10-13T04:15:46.352Z] Copying: 448/1024 [MB] (15 MBps) [2024-10-13T04:15:47.295Z] Copying: 468/1024 [MB] (20 MBps) [2024-10-13T04:15:48.237Z] Copying: 484/1024 [MB] (15 MBps) [2024-10-13T04:15:49.180Z] Copying: 500/1024 [MB] (16 MBps) [2024-10-13T04:15:50.121Z] Copying: 512/1024 [MB] (11 MBps) [2024-10-13T04:15:51.062Z] Copying: 525/1024 [MB] (13 MBps) [2024-10-13T04:15:52.003Z] Copying: 536/1024 [MB] (11 MBps) [2024-10-13T04:15:53.396Z] Copying: 552/1024 [MB] (15 MBps) [2024-10-13T04:15:54.012Z] Copying: 572/1024 [MB] (19 MBps) [2024-10-13T04:15:54.951Z] Copying: 589/1024 [MB] (16 MBps) [2024-10-13T04:15:56.330Z] Copying: 609/1024 [MB] (20 MBps) [2024-10-13T04:15:57.271Z] Copying: 629/1024 [MB] (19 MBps) [2024-10-13T04:15:58.210Z] Copying: 653/1024 [MB] (24 MBps) [2024-10-13T04:15:59.143Z] Copying: 667/1024 [MB] (13 MBps) [2024-10-13T04:16:00.077Z] Copying: 678/1024 [MB] (11 MBps) [2024-10-13T04:16:01.011Z] Copying: 690/1024 [MB] (11 MBps) [2024-10-13T04:16:02.386Z] Copying: 702/1024 [MB] (12 MBps) [2024-10-13T04:16:02.957Z] Copying: 714/1024 [MB] (11 MBps) [2024-10-13T04:16:04.337Z] Copying: 725/1024 [MB] (11 MBps) [2024-10-13T04:16:05.277Z] Copying: 736/1024 [MB] (11 MBps) [2024-10-13T04:16:06.216Z] Copying: 747/1024 [MB] (10 MBps) [2024-10-13T04:16:07.180Z] Copying: 761/1024 [MB] (13 MBps) [2024-10-13T04:16:08.123Z] Copying: 772/1024 [MB] (10 MBps) [2024-10-13T04:16:09.064Z] Copying: 784/1024 [MB] (12 MBps) [2024-10-13T04:16:10.005Z] Copying: 795/1024 [MB] (11 MBps) [2024-10-13T04:16:10.991Z] Copying: 807/1024 [MB] (11 MBps) 
[2024-10-13T04:16:12.377Z] Copying: 818/1024 [MB] (11 MBps) [2024-10-13T04:16:13.319Z] Copying: 829/1024 [MB] (10 MBps) [2024-10-13T04:16:14.262Z] Copying: 852/1024 [MB] (23 MBps) [2024-10-13T04:16:15.206Z] Copying: 864/1024 [MB] (12 MBps) [2024-10-13T04:16:16.151Z] Copying: 880/1024 [MB] (15 MBps) [2024-10-13T04:16:17.095Z] Copying: 890/1024 [MB] (10 MBps) [2024-10-13T04:16:18.037Z] Copying: 901/1024 [MB] (10 MBps) [2024-10-13T04:16:18.972Z] Copying: 912/1024 [MB] (11 MBps) [2024-10-13T04:16:20.358Z] Copying: 923/1024 [MB] (11 MBps) [2024-10-13T04:16:21.302Z] Copying: 934/1024 [MB] (11 MBps) [2024-10-13T04:16:22.246Z] Copying: 952/1024 [MB] (18 MBps) [2024-10-13T04:16:23.189Z] Copying: 972/1024 [MB] (19 MBps) [2024-10-13T04:16:24.131Z] Copying: 998/1024 [MB] (26 MBps) [2024-10-13T04:16:24.392Z] Copying: 1018/1024 [MB] (19 MBps) [2024-10-13T04:16:24.655Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-13 04:16:24.633401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.495 [2024-10-13 04:16:24.633516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:31.495 [2024-10-13 04:16:24.633544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:31.495 [2024-10-13 04:16:24.633561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.495 [2024-10-13 04:16:24.633607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:31.495 [2024-10-13 04:16:24.638715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.495 [2024-10-13 04:16:24.638773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:31.495 [2024-10-13 04:16:24.638785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.052 ms 00:25:31.495 [2024-10-13 04:16:24.638794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.495 [2024-10-13 04:16:24.639044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.495 [2024-10-13 04:16:24.639056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:31.495 [2024-10-13 04:16:24.639066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:25:31.495 [2024-10-13 04:16:24.639074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.495 [2024-10-13 04:16:24.639106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.495 [2024-10-13 04:16:24.639116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:25:31.495 [2024-10-13 04:16:24.639129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:31.495 [2024-10-13 04:16:24.639137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.495 [2024-10-13 04:16:24.639199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.495 [2024-10-13 04:16:24.639215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:25:31.495 [2024-10-13 04:16:24.639224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:31.495 [2024-10-13 04:16:24.639232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.495 [2024-10-13 04:16:24.639247] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:31.495 [2024-10-13 04:16:24.639261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 
wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
26: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:31.495 [2024-10-13 04:16:24.639551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639699] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.639707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.640962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.640973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.640983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.640992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641150] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 04:16:24.641339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:31.496 [2024-10-13 
04:16:24.641356] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:31.496 [2024-10-13 04:16:24.641365] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fec26d04-fef4-44ba-8cc2-76710abc8644 00:25:31.496 [2024-10-13 04:16:24.641373] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:31.496 [2024-10-13 04:16:24.641384] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:25:31.496 [2024-10-13 04:16:24.641392] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:31.496 [2024-10-13 04:16:24.641400] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:31.496 [2024-10-13 04:16:24.641408] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:31.496 [2024-10-13 04:16:24.641416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:31.496 [2024-10-13 04:16:24.641424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:31.496 [2024-10-13 04:16:24.641431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:31.496 [2024-10-13 04:16:24.641438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:31.496 [2024-10-13 04:16:24.641449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.496 [2024-10-13 04:16:24.641457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:31.496 [2024-10-13 04:16:24.641467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:25:31.496 [2024-10-13 04:16:24.641636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.655404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.758 [2024-10-13 04:16:24.655454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:31.758 [2024-10-13 04:16:24.655467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.745 ms 00:25:31.758 [2024-10-13 04:16:24.655475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.655890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.758 [2024-10-13 04:16:24.655910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:31.758 [2024-10-13 04:16:24.655920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:25:31.758 [2024-10-13 04:16:24.655928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.692647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.758 [2024-10-13 04:16:24.692699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:31.758 [2024-10-13 04:16:24.692712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.758 [2024-10-13 04:16:24.692722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.692796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.758 [2024-10-13 04:16:24.692806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.758 [2024-10-13 04:16:24.692816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.758 [2024-10-13 04:16:24.692827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.692896] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.758 [2024-10-13 04:16:24.692908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.758 [2024-10-13 04:16:24.692918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.758 [2024-10-13 04:16:24.692927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.692945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.758 [2024-10-13 04:16:24.692954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.758 [2024-10-13 04:16:24.692962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.758 [2024-10-13 04:16:24.692970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.778197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.758 [2024-10-13 04:16:24.778258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.758 [2024-10-13 04:16:24.778272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.758 [2024-10-13 04:16:24.778281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.847831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.758 [2024-10-13 04:16:24.847897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:31.758 [2024-10-13 04:16:24.847910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.758 [2024-10-13 04:16:24.847919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.848005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.758 [2024-10-13 04:16:24.848022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.758 [2024-10-13 04:16:24.848032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.758 [2024-10-13 04:16:24.848041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.848079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.758 [2024-10-13 04:16:24.848106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.758 [2024-10-13 04:16:24.848115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.758 [2024-10-13 04:16:24.848123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.848207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.758 [2024-10-13 04:16:24.848218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.758 [2024-10-13 04:16:24.848229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.758 [2024-10-13 04:16:24.848237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.848264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.758 [2024-10-13 04:16:24.848273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:31.758 [2024-10-13 04:16:24.848281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.758 [2024-10-13 04:16:24.848290] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.848331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.758 [2024-10-13 04:16:24.848340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:31.758 [2024-10-13 04:16:24.848352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.758 [2024-10-13 04:16:24.848360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.848403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.758 [2024-10-13 04:16:24.848415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:31.758 [2024-10-13 04:16:24.848423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.758 [2024-10-13 04:16:24.848432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.758 [2024-10-13 04:16:24.848567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 215.149 ms, result 0 00:25:32.699 00:25:32.699 00:25:32.699 04:16:25 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:35.291 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:35.291 04:16:27 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:35.291 [2024-10-13 04:16:27.890970] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 00:25:35.291 [2024-10-13 04:16:27.891125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79870 ] 00:25:35.291 [2024-10-13 04:16:28.044910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.291 [2024-10-13 04:16:28.171357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.558 [2024-10-13 04:16:28.461093] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.558 [2024-10-13 04:16:28.461183] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:35.558 [2024-10-13 04:16:28.622587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.558 [2024-10-13 04:16:28.622674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:35.558 [2024-10-13 04:16:28.622691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:35.558 [2024-10-13 04:16:28.622705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.558 [2024-10-13 04:16:28.622762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.558 [2024-10-13 04:16:28.622774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:35.558 [2024-10-13 04:16:28.622782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:35.558 [2024-10-13 04:16:28.622793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.558 [2024-10-13 04:16:28.622814] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:35.558 [2024-10-13 
04:16:28.623719] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:35.558 [2024-10-13 04:16:28.623764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.558 [2024-10-13 04:16:28.623776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:35.558 [2024-10-13 04:16:28.623787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:25:35.558 [2024-10-13 04:16:28.623795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.558 [2024-10-13 04:16:28.624107] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:25:35.558 [2024-10-13 04:16:28.624135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.558 [2024-10-13 04:16:28.624144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:35.558 [2024-10-13 04:16:28.624154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:35.558 [2024-10-13 04:16:28.624165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.558 [2024-10-13 04:16:28.624220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.558 [2024-10-13 04:16:28.624238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:35.558 [2024-10-13 04:16:28.624247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:35.558 [2024-10-13 04:16:28.624255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.558 [2024-10-13 04:16:28.624526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.558 [2024-10-13 04:16:28.624538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:35.558 [2024-10-13 04:16:28.624550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:25:35.558 [2024-10-13 04:16:28.624557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.558 [2024-10-13 04:16:28.624697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.558 [2024-10-13 04:16:28.624710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:35.558 [2024-10-13 04:16:28.624719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:35.558 [2024-10-13 04:16:28.624727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.558 [2024-10-13 04:16:28.624751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.558 [2024-10-13 04:16:28.624760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:35.558 [2024-10-13 04:16:28.624768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:35.558 [2024-10-13 04:16:28.624775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.558 [2024-10-13 04:16:28.624799] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:35.558 [2024-10-13 04:16:28.629125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.558 [2024-10-13 04:16:28.629167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:35.558 [2024-10-13 04:16:28.629181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.330 ms 00:25:35.558 [2024-10-13 04:16:28.629188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:35.558 [2024-10-13 04:16:28.629223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.558 [2024-10-13 04:16:28.629239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:35.558 [2024-10-13 04:16:28.629248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:35.558 [2024-10-13 04:16:28.629256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.558 [2024-10-13 04:16:28.629315] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:35.558 [2024-10-13 04:16:28.629337] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:35.558 [2024-10-13 04:16:28.629374] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:35.558 [2024-10-13 04:16:28.629393] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:35.558 [2024-10-13 04:16:28.629499] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:35.558 [2024-10-13 04:16:28.629511] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:35.558 [2024-10-13 04:16:28.629522] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:35.558 [2024-10-13 04:16:28.629532] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:35.558 [2024-10-13 04:16:28.629541] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:35.558 [2024-10-13 04:16:28.629549] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:35.558 [2024-10-13 04:16:28.629557] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:35.558 [2024-10-13 04:16:28.629568] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:35.558 [2024-10-13 04:16:28.629576] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:35.558 [2024-10-13 04:16:28.629584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.558 [2024-10-13 04:16:28.629592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:35.558 [2024-10-13 04:16:28.629599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:25:35.558 [2024-10-13 04:16:28.629606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.558 [2024-10-13 04:16:28.629708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.558 [2024-10-13 04:16:28.629719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:35.558 [2024-10-13 04:16:28.629727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:35.558 [2024-10-13 04:16:28.629735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.558 [2024-10-13 04:16:28.629841] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:35.558 [2024-10-13 04:16:28.629854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:35.558 [2024-10-13 04:16:28.629864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.558 [2024-10-13 04:16:28.629871] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.558 [2024-10-13 04:16:28.629880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:35.558 [2024-10-13 04:16:28.629887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:35.558 [2024-10-13 04:16:28.629894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:35.558 [2024-10-13 04:16:28.629902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:35.558 [2024-10-13 04:16:28.629909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:35.558 [2024-10-13 04:16:28.629917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.558 [2024-10-13 04:16:28.629924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:35.558 [2024-10-13 04:16:28.629931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:35.558 [2024-10-13 04:16:28.629941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:35.558 [2024-10-13 04:16:28.629949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:35.558 [2024-10-13 04:16:28.629956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:35.558 [2024-10-13 04:16:28.629962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.558 [2024-10-13 04:16:28.629969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:35.558 [2024-10-13 04:16:28.629982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:35.558 [2024-10-13 04:16:28.629989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.558 [2024-10-13 04:16:28.629996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:35.558 [2024-10-13 04:16:28.630003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:35.558 [2024-10-13 04:16:28.630011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.558 [2024-10-13 04:16:28.630018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:35.558 [2024-10-13 04:16:28.630025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:35.558 [2024-10-13 04:16:28.630032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.558 [2024-10-13 04:16:28.630039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:35.558 [2024-10-13 04:16:28.630046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:35.558 [2024-10-13 04:16:28.630053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.558 [2024-10-13 04:16:28.630061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:35.558 [2024-10-13 04:16:28.630067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:35.558 [2024-10-13 04:16:28.630074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:35.558 [2024-10-13 04:16:28.630081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:35.559 [2024-10-13 04:16:28.630087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:35.559 [2024-10-13 04:16:28.630095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.559 [2024-10-13 04:16:28.630101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:35.559 [2024-10-13 
04:16:28.630107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:35.559 [2024-10-13 04:16:28.630113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:35.559 [2024-10-13 04:16:28.630120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:35.559 [2024-10-13 04:16:28.630127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:35.559 [2024-10-13 04:16:28.630134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.559 [2024-10-13 04:16:28.630140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:35.559 [2024-10-13 04:16:28.630146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:35.559 [2024-10-13 04:16:28.630152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.559 [2024-10-13 04:16:28.630159] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:35.559 [2024-10-13 04:16:28.630168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:35.559 [2024-10-13 04:16:28.630176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:35.559 [2024-10-13 04:16:28.630183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:35.559 [2024-10-13 04:16:28.630191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:35.559 [2024-10-13 04:16:28.630199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:35.559 [2024-10-13 04:16:28.630206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:35.559 [2024-10-13 04:16:28.630212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:35.559 [2024-10-13 04:16:28.630219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:35.559 [2024-10-13 04:16:28.630226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:35.559 [2024-10-13 04:16:28.630235] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:35.559 [2024-10-13 04:16:28.630244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.559 [2024-10-13 04:16:28.630256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:35.559 [2024-10-13 04:16:28.630264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:35.559 [2024-10-13 04:16:28.630271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:35.559 [2024-10-13 04:16:28.630278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:35.559 [2024-10-13 04:16:28.630285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:35.559 [2024-10-13 04:16:28.630292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:35.559 [2024-10-13 04:16:28.630300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 
00:25:35.559 [2024-10-13 04:16:28.630307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:35.559 [2024-10-13 04:16:28.630314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:35.559 [2024-10-13 04:16:28.630322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:35.559 [2024-10-13 04:16:28.630329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:35.559 [2024-10-13 04:16:28.630336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:35.559 [2024-10-13 04:16:28.630343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:35.559 [2024-10-13 04:16:28.630352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:35.559 [2024-10-13 04:16:28.630359] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:35.559 [2024-10-13 04:16:28.630368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:35.559 [2024-10-13 04:16:28.630377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:35.559 [2024-10-13 04:16:28.630385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:35.559 [2024-10-13 04:16:28.630392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:35.559 [2024-10-13 04:16:28.630400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:35.559 [2024-10-13 04:16:28.630407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.559 [2024-10-13 04:16:28.630416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:35.559 [2024-10-13 04:16:28.630424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.634 ms 00:25:35.559 [2024-10-13 04:16:28.630432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.559 [2024-10-13 04:16:28.658324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.559 [2024-10-13 04:16:28.658376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.559 [2024-10-13 04:16:28.658389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.851 ms 00:25:35.559 [2024-10-13 04:16:28.658397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.559 [2024-10-13 04:16:28.658482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.559 [2024-10-13 04:16:28.658491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:35.559 [2024-10-13 04:16:28.658500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:35.559 [2024-10-13 04:16:28.658508] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.559 [2024-10-13 04:16:28.704364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.559 [2024-10-13 04:16:28.704424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.559 [2024-10-13 04:16:28.704438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.795 ms 00:25:35.559 [2024-10-13 04:16:28.704447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.559 [2024-10-13 04:16:28.704496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.559 [2024-10-13 04:16:28.704508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.559 [2024-10-13 04:16:28.704521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:35.559 [2024-10-13 04:16:28.704529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.559 [2024-10-13 04:16:28.704660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.559 [2024-10-13 04:16:28.704673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.559 [2024-10-13 04:16:28.704683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:35.559 [2024-10-13 04:16:28.704691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.559 [2024-10-13 04:16:28.704820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.559 [2024-10-13 04:16:28.704830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.559 [2024-10-13 04:16:28.704840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:25:35.559 [2024-10-13 04:16:28.704851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.720735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.720787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.821 [2024-10-13 04:16:28.720801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.862 ms 00:25:35.821 [2024-10-13 04:16:28.720809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.720967] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:35.821 [2024-10-13 04:16:28.720982] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:35.821 [2024-10-13 04:16:28.720993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.721000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:35.821 [2024-10-13 04:16:28.721010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:35.821 [2024-10-13 04:16:28.721020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.733469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.733522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:35.821 [2024-10-13 04:16:28.733534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.428 ms 00:25:35.821 [2024-10-13 04:16:28.733542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 
[2024-10-13 04:16:28.733674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.733684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:35.821 [2024-10-13 04:16:28.733693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:25:35.821 [2024-10-13 04:16:28.733700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.733756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.733772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:35.821 [2024-10-13 04:16:28.733781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:35.821 [2024-10-13 04:16:28.733789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.734366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.734392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:35.821 [2024-10-13 04:16:28.734401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:25:35.821 [2024-10-13 04:16:28.734409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.734426] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:25:35.821 [2024-10-13 04:16:28.734436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.734447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:35.821 [2024-10-13 04:16:28.734455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:35.821 [2024-10-13 04:16:28.734462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.747131] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:35.821 [2024-10-13 04:16:28.747306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.747316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:35.821 [2024-10-13 04:16:28.747327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.824 ms 00:25:35.821 [2024-10-13 04:16:28.747335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.749532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.749571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:35.821 [2024-10-13 04:16:28.749584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:25:35.821 [2024-10-13 04:16:28.749592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.749702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.749713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:35.821 [2024-10-13 04:16:28.749724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:35.821 [2024-10-13 04:16:28.749731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.749755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:35.821 [2024-10-13 04:16:28.749763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:35.821 [2024-10-13 04:16:28.749772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:35.821 [2024-10-13 04:16:28.749784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.749815] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:35.821 [2024-10-13 04:16:28.749824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.749832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:35.821 [2024-10-13 04:16:28.749840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:35.821 [2024-10-13 04:16:28.749847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.777083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.777144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:35.821 [2024-10-13 04:16:28.777165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.215 ms 00:25:35.821 [2024-10-13 04:16:28.777173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.777265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.821 [2024-10-13 04:16:28.777275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:35.821 [2024-10-13 04:16:28.777286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:35.821 [2024-10-13 04:16:28.777294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.821 [2024-10-13 04:16:28.778482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.432 ms, result 0 00:25:36.765  [2024-10-13T04:16:30.869Z] Copying: 14/1024 [MB] (14 MBps) [2024-10-13T04:16:31.812Z] Copying: 27/1024 [MB] (12 MBps) [2024-10-13T04:16:33.198Z] Copying: 41/1024 [MB] (13 MBps) [2024-10-13T04:16:34.143Z] Copying: 56/1024 [MB] (15 MBps) [2024-10-13T04:16:35.087Z] Copying: 77/1024 [MB] (20 MBps) [2024-10-13T04:16:36.031Z] Copying: 131/1024 [MB] (53 MBps) [2024-10-13T04:16:36.975Z] Copying: 160/1024 [MB] (29 MBps) [2024-10-13T04:16:37.919Z] Copying: 170/1024 [MB] (10 MBps) [2024-10-13T04:16:38.858Z] Copying: 183/1024 [MB] (13 MBps) [2024-10-13T04:16:39.801Z] Copying: 201/1024 [MB] (17 MBps) [2024-10-13T04:16:41.188Z] Copying: 224/1024 [MB] (22 MBps) [2024-10-13T04:16:42.131Z] Copying: 241/1024 [MB] (16 MBps) [2024-10-13T04:16:43.075Z] Copying: 262/1024 [MB] (21 MBps) [2024-10-13T04:16:44.019Z] Copying: 276/1024 [MB] (13 MBps) [2024-10-13T04:16:44.963Z] Copying: 294/1024 [MB] (17 MBps) [2024-10-13T04:16:45.945Z] Copying: 307/1024 [MB] (13 MBps) [2024-10-13T04:16:46.888Z] Copying: 327/1024 [MB] (19 MBps) [2024-10-13T04:16:47.831Z] Copying: 346/1024 [MB] (19 MBps) [2024-10-13T04:16:49.230Z] Copying: 360/1024 [MB] (14 MBps) [2024-10-13T04:16:49.803Z] Copying: 378/1024 [MB] (17 MBps) [2024-10-13T04:16:51.191Z] Copying: 390/1024 [MB] (12 MBps) [2024-10-13T04:16:52.134Z] Copying: 400/1024 [MB] (10 MBps) [2024-10-13T04:16:53.078Z] Copying: 411/1024 [MB] (10 MBps) [2024-10-13T04:16:54.022Z] Copying: 421/1024 [MB] (10 MBps) [2024-10-13T04:16:54.965Z] Copying: 432/1024 [MB] (10 MBps) [2024-10-13T04:16:55.909Z] 
Copying: 445/1024 [MB] (13 MBps) [2024-10-13T04:16:56.851Z] Copying: 456/1024 [MB] (10 MBps) [2024-10-13T04:16:57.791Z] Copying: 467/1024 [MB] (10 MBps) [2024-10-13T04:16:59.176Z] Copying: 477/1024 [MB] (10 MBps) [2024-10-13T04:17:00.120Z] Copying: 488/1024 [MB] (10 MBps) [2024-10-13T04:17:01.063Z] Copying: 499/1024 [MB] (11 MBps) [2024-10-13T04:17:02.007Z] Copying: 510/1024 [MB] (10 MBps) [2024-10-13T04:17:02.957Z] Copying: 520/1024 [MB] (10 MBps) [2024-10-13T04:17:03.932Z] Copying: 531/1024 [MB] (10 MBps) [2024-10-13T04:17:04.876Z] Copying: 542/1024 [MB] (11 MBps) [2024-10-13T04:17:05.819Z] Copying: 554/1024 [MB] (11 MBps) [2024-10-13T04:17:07.205Z] Copying: 567/1024 [MB] (13 MBps) [2024-10-13T04:17:08.152Z] Copying: 584/1024 [MB] (17 MBps) [2024-10-13T04:17:09.095Z] Copying: 595/1024 [MB] (10 MBps) [2024-10-13T04:17:10.040Z] Copying: 607/1024 [MB] (11 MBps) [2024-10-13T04:17:10.984Z] Copying: 618/1024 [MB] (10 MBps) [2024-10-13T04:17:11.928Z] Copying: 634/1024 [MB] (16 MBps) [2024-10-13T04:17:12.872Z] Copying: 645/1024 [MB] (11 MBps) [2024-10-13T04:17:13.815Z] Copying: 662/1024 [MB] (16 MBps) [2024-10-13T04:17:15.202Z] Copying: 677/1024 [MB] (15 MBps) [2024-10-13T04:17:16.146Z] Copying: 695/1024 [MB] (18 MBps) [2024-10-13T04:17:17.089Z] Copying: 715/1024 [MB] (20 MBps) [2024-10-13T04:17:18.033Z] Copying: 726/1024 [MB] (10 MBps) [2024-10-13T04:17:18.974Z] Copying: 736/1024 [MB] (10 MBps) [2024-10-13T04:17:19.944Z] Copying: 747/1024 [MB] (10 MBps) [2024-10-13T04:17:20.897Z] Copying: 782/1024 [MB] (35 MBps) [2024-10-13T04:17:21.841Z] Copying: 802/1024 [MB] (20 MBps) [2024-10-13T04:17:23.227Z] Copying: 821/1024 [MB] (18 MBps) [2024-10-13T04:17:23.919Z] Copying: 833/1024 [MB] (11 MBps) [2024-10-13T04:17:24.863Z] Copying: 849/1024 [MB] (16 MBps) [2024-10-13T04:17:25.807Z] Copying: 870/1024 [MB] (20 MBps) [2024-10-13T04:17:27.194Z] Copying: 888/1024 [MB] (18 MBps) [2024-10-13T04:17:28.138Z] Copying: 905/1024 [MB] (16 MBps) [2024-10-13T04:17:29.079Z] Copying: 929/1024 [MB] (23 MBps) [2024-10-13T04:17:30.021Z] Copying: 948/1024 [MB] (19 MBps) [2024-10-13T04:17:30.965Z] Copying: 967/1024 [MB] (18 MBps) [2024-10-13T04:17:31.909Z] Copying: 981/1024 [MB] (13 MBps) [2024-10-13T04:17:32.853Z] Copying: 999/1024 [MB] (18 MBps) [2024-10-13T04:17:33.796Z] Copying: 1016/1024 [MB] (17 MBps) [2024-10-13T04:17:34.371Z] Copying: 1048200/1048576 [kB] (7288 kBps) [2024-10-13T04:17:34.371Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-10-13 04:17:34.209006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.211 [2024-10-13 04:17:34.209086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:41.211 [2024-10-13 04:17:34.209113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:41.211 [2024-10-13 04:17:34.209124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.211 [2024-10-13 04:17:34.212205] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:41.211 [2024-10-13 04:17:34.217347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.211 [2024-10-13 04:17:34.217400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:41.211 [2024-10-13 04:17:34.217413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.087 ms 00:26:41.211 [2024-10-13 04:17:34.217422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.211 [2024-10-13 04:17:34.228352] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.211 [2024-10-13 04:17:34.228403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:41.211 [2024-10-13 04:17:34.228423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.711 ms 00:26:41.211 [2024-10-13 04:17:34.228431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.211 [2024-10-13 04:17:34.228462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.211 [2024-10-13 04:17:34.228471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:41.211 [2024-10-13 04:17:34.228481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:41.211 [2024-10-13 04:17:34.228489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.211 [2024-10-13 04:17:34.228548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.211 [2024-10-13 04:17:34.228559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:41.211 [2024-10-13 04:17:34.228568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:41.211 [2024-10-13 04:17:34.228576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.211 [2024-10-13 04:17:34.228593] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:41.211 [2024-10-13 04:17:34.228606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127488 / 261120 wr_cnt: 1 state: open 00:26:41.211 [2024-10-13 04:17:34.228631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 
state: free 00:26:41.211 [2024-10-13 04:17:34.228742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 
0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.228999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:41.211 [2024-10-13 04:17:34.229205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229325] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:41.212 [2024-10-13 04:17:34.229417] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:41.212 [2024-10-13 04:17:34.229425] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fec26d04-fef4-44ba-8cc2-76710abc8644 00:26:41.212 [2024-10-13 04:17:34.229434] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127488 00:26:41.212 [2024-10-13 04:17:34.229441] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127520 00:26:41.212 [2024-10-13 04:17:34.229449] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127488 00:26:41.212 [2024-10-13 04:17:34.229457] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:26:41.212 [2024-10-13 04:17:34.229464] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:41.212 [2024-10-13 04:17:34.229473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:41.212 [2024-10-13 04:17:34.229480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:41.212 [2024-10-13 04:17:34.229487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:41.212 [2024-10-13 04:17:34.229494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:41.212 [2024-10-13 04:17:34.229500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.212 [2024-10-13 04:17:34.229514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:41.212 [2024-10-13 04:17:34.229523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:26:41.212 [2024-10-13 04:17:34.229530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.212 [2024-10-13 04:17:34.243231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.212 [2024-10-13 04:17:34.243283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:41.212 [2024-10-13 04:17:34.243297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 13.684 ms 00:26:41.212 [2024-10-13 04:17:34.243305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.212 [2024-10-13 04:17:34.243728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.212 [2024-10-13 04:17:34.243760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:41.212 [2024-10-13 04:17:34.243770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:26:41.212 [2024-10-13 04:17:34.243778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.212 [2024-10-13 04:17:34.280410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.212 [2024-10-13 04:17:34.280464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:41.212 [2024-10-13 04:17:34.280477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.212 [2024-10-13 04:17:34.280492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.212 [2024-10-13 04:17:34.280563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.212 [2024-10-13 04:17:34.280573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:41.212 [2024-10-13 04:17:34.280583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.212 [2024-10-13 04:17:34.280593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.212 [2024-10-13 04:17:34.280670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.212 [2024-10-13 04:17:34.280682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:41.212 [2024-10-13 04:17:34.280693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.212 [2024-10-13 04:17:34.280702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.212 [2024-10-13 04:17:34.280725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.212 [2024-10-13 04:17:34.280734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:41.212 [2024-10-13 04:17:34.280743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.212 [2024-10-13 04:17:34.280751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.212 [2024-10-13 04:17:34.366161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.212 [2024-10-13 04:17:34.366220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:41.212 [2024-10-13 04:17:34.366234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.212 [2024-10-13 04:17:34.366250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.472 [2024-10-13 04:17:34.436294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.473 [2024-10-13 04:17:34.436359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:41.473 [2024-10-13 04:17:34.436372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.473 [2024-10-13 04:17:34.436388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.473 [2024-10-13 04:17:34.436473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.473 [2024-10-13 04:17:34.436484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 
00:26:41.473 [2024-10-13 04:17:34.436494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.473 [2024-10-13 04:17:34.436502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.473 [2024-10-13 04:17:34.436540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.473 [2024-10-13 04:17:34.436554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:41.473 [2024-10-13 04:17:34.436563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.473 [2024-10-13 04:17:34.436571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.473 [2024-10-13 04:17:34.436677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.473 [2024-10-13 04:17:34.436688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:41.473 [2024-10-13 04:17:34.436696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.473 [2024-10-13 04:17:34.436705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.473 [2024-10-13 04:17:34.436730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.473 [2024-10-13 04:17:34.436742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:41.473 [2024-10-13 04:17:34.436751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.473 [2024-10-13 04:17:34.436759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.473 [2024-10-13 04:17:34.436800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.473 [2024-10-13 04:17:34.436810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:41.473 [2024-10-13 04:17:34.436820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.473 [2024-10-13 04:17:34.436828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.473 [2024-10-13 04:17:34.436874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:41.473 [2024-10-13 04:17:34.436886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:41.473 [2024-10-13 04:17:34.436895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:41.473 [2024-10-13 04:17:34.436904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.473 [2024-10-13 04:17:34.437034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 230.911 ms, result 0 00:26:44.026 00:26:44.026 00:26:44.026 04:17:36 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:44.026 [2024-10-13 04:17:36.729900] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:26:44.026 [2024-10-13 04:17:36.730058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80559 ] 00:26:44.026 [2024-10-13 04:17:36.883219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.026 [2024-10-13 04:17:37.005806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.287 [2024-10-13 04:17:37.297712] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:44.287 [2024-10-13 04:17:37.297785] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:44.550 [2024-10-13 04:17:37.458496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.550 [2024-10-13 04:17:37.458565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:44.550 [2024-10-13 04:17:37.458580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:44.550 [2024-10-13 04:17:37.458593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.550 [2024-10-13 04:17:37.458663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.550 [2024-10-13 04:17:37.458675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:44.550 [2024-10-13 04:17:37.458684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:44.550 [2024-10-13 04:17:37.458695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.550 [2024-10-13 04:17:37.458716] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:44.550 [2024-10-13 04:17:37.459718] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:44.550 [2024-10-13 04:17:37.459770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.550 [2024-10-13 04:17:37.459784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:44.550 [2024-10-13 04:17:37.459796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:26:44.550 [2024-10-13 04:17:37.459805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.550 [2024-10-13 04:17:37.460290] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:44.550 [2024-10-13 04:17:37.460353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.550 [2024-10-13 04:17:37.460364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:44.550 [2024-10-13 04:17:37.460375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:44.550 [2024-10-13 04:17:37.460390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.550 [2024-10-13 04:17:37.460443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.550 [2024-10-13 04:17:37.460453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:44.550 [2024-10-13 04:17:37.460462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:44.550 [2024-10-13 04:17:37.460470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.550 [2024-10-13 04:17:37.460774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:44.550 [2024-10-13 04:17:37.460787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:44.550 [2024-10-13 04:17:37.460798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:26:44.550 [2024-10-13 04:17:37.460806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.550 [2024-10-13 04:17:37.460877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.550 [2024-10-13 04:17:37.460886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:44.550 [2024-10-13 04:17:37.460896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:44.550 [2024-10-13 04:17:37.460903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.550 [2024-10-13 04:17:37.460927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.550 [2024-10-13 04:17:37.460935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:44.550 [2024-10-13 04:17:37.460943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:44.550 [2024-10-13 04:17:37.460951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.550 [2024-10-13 04:17:37.460974] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:44.550 [2024-10-13 04:17:37.465227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.550 [2024-10-13 04:17:37.465272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:44.550 [2024-10-13 04:17:37.465286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.258 ms 00:26:44.550 [2024-10-13 04:17:37.465294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.551 [2024-10-13 04:17:37.465329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.551 [2024-10-13 04:17:37.465337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:44.551 [2024-10-13 04:17:37.465346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:44.551 [2024-10-13 04:17:37.465353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.551 [2024-10-13 04:17:37.465414] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:44.551 [2024-10-13 04:17:37.465437] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:44.551 [2024-10-13 04:17:37.465474] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:44.551 [2024-10-13 04:17:37.465493] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:44.551 [2024-10-13 04:17:37.465598] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:44.551 [2024-10-13 04:17:37.465609] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:44.551 [2024-10-13 04:17:37.465643] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:44.551 [2024-10-13 04:17:37.465653] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:44.551 [2024-10-13 04:17:37.465663] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:44.551 [2024-10-13 04:17:37.465672] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:44.551 [2024-10-13 04:17:37.465680] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:44.551 [2024-10-13 04:17:37.465690] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:44.551 [2024-10-13 04:17:37.465698] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:44.551 [2024-10-13 04:17:37.465706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.551 [2024-10-13 04:17:37.465714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:44.551 [2024-10-13 04:17:37.465722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:26:44.551 [2024-10-13 04:17:37.465729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.551 [2024-10-13 04:17:37.465812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.551 [2024-10-13 04:17:37.465821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:44.551 [2024-10-13 04:17:37.465829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:44.551 [2024-10-13 04:17:37.465836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.551 [2024-10-13 04:17:37.465941] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:44.551 [2024-10-13 04:17:37.465961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:44.551 [2024-10-13 04:17:37.465970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:44.551 [2024-10-13 04:17:37.465978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.551 [2024-10-13 04:17:37.465986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:44.551 [2024-10-13 04:17:37.465993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:44.551 [2024-10-13 04:17:37.466000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:44.551 [2024-10-13 04:17:37.466008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:44.551 [2024-10-13 04:17:37.466015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:44.551 [2024-10-13 04:17:37.466024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:44.551 [2024-10-13 04:17:37.466032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:44.551 [2024-10-13 04:17:37.466040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:44.551 [2024-10-13 04:17:37.466047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:44.551 [2024-10-13 04:17:37.466055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:44.551 [2024-10-13 04:17:37.466062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:44.551 [2024-10-13 04:17:37.466068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.551 [2024-10-13 04:17:37.466075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:44.551 [2024-10-13 04:17:37.466088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:44.551 [2024-10-13 04:17:37.466094] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.551 [2024-10-13 04:17:37.466102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:44.551 [2024-10-13 04:17:37.466109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:44.551 [2024-10-13 04:17:37.466116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.551 [2024-10-13 04:17:37.466122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:44.551 [2024-10-13 04:17:37.466130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:44.551 [2024-10-13 04:17:37.466136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.551 [2024-10-13 04:17:37.466143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:44.551 [2024-10-13 04:17:37.466149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:44.551 [2024-10-13 04:17:37.466155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.551 [2024-10-13 04:17:37.466162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:44.551 [2024-10-13 04:17:37.466169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:44.551 [2024-10-13 04:17:37.466176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.551 [2024-10-13 04:17:37.466183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:44.551 [2024-10-13 04:17:37.466189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:44.551 [2024-10-13 04:17:37.466195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:44.551 [2024-10-13 04:17:37.466202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:44.551 [2024-10-13 04:17:37.466208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:44.551 [2024-10-13 04:17:37.466215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:44.551 [2024-10-13 04:17:37.466221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:44.551 [2024-10-13 04:17:37.466228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:44.551 [2024-10-13 04:17:37.466234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.551 [2024-10-13 04:17:37.466241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:44.551 [2024-10-13 04:17:37.466249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:44.551 [2024-10-13 04:17:37.466256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.551 [2024-10-13 04:17:37.466263] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:44.551 [2024-10-13 04:17:37.466271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:44.551 [2024-10-13 04:17:37.466279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:44.551 [2024-10-13 04:17:37.466287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.551 [2024-10-13 04:17:37.466295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:44.551 [2024-10-13 04:17:37.466302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:44.551 [2024-10-13 04:17:37.466309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:44.551 
[2024-10-13 04:17:37.466318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:44.551 [2024-10-13 04:17:37.466324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:44.551 [2024-10-13 04:17:37.466331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:44.551 [2024-10-13 04:17:37.466340] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:44.551 [2024-10-13 04:17:37.466350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:44.551 [2024-10-13 04:17:37.466365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:44.551 [2024-10-13 04:17:37.466373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:44.551 [2024-10-13 04:17:37.466380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:44.551 [2024-10-13 04:17:37.466387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:44.551 [2024-10-13 04:17:37.466394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:44.551 [2024-10-13 04:17:37.466401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:44.551 [2024-10-13 04:17:37.466408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:44.551 [2024-10-13 04:17:37.466415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:44.551 [2024-10-13 04:17:37.466422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:44.551 [2024-10-13 04:17:37.466429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:44.551 [2024-10-13 04:17:37.466436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:44.551 [2024-10-13 04:17:37.466443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:44.551 [2024-10-13 04:17:37.466451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:44.551 [2024-10-13 04:17:37.466458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:44.551 [2024-10-13 04:17:37.466465] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:44.551 [2024-10-13 04:17:37.466472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:44.551 [2024-10-13 04:17:37.466480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:44.551 [2024-10-13 04:17:37.466487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:44.551 [2024-10-13 04:17:37.466497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:44.551 [2024-10-13 04:17:37.466505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:44.551 [2024-10-13 04:17:37.466512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.551 [2024-10-13 04:17:37.466521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:44.551 [2024-10-13 04:17:37.466529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:26:44.552 [2024-10-13 04:17:37.466536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.494062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.494108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:44.552 [2024-10-13 04:17:37.494121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.482 ms 00:26:44.552 [2024-10-13 04:17:37.494129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.494210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.494219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:44.552 [2024-10-13 04:17:37.494228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:44.552 [2024-10-13 04:17:37.494236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.544839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.544896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:44.552 [2024-10-13 04:17:37.544910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.542 ms 00:26:44.552 [2024-10-13 04:17:37.544919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.544967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.544978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:44.552 [2024-10-13 04:17:37.544992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:44.552 [2024-10-13 04:17:37.545000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.545109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.545121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:44.552 [2024-10-13 04:17:37.545131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:44.552 [2024-10-13 04:17:37.545138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.545268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.545287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:44.552 [2024-10-13 04:17:37.545297] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:26:44.552 [2024-10-13 04:17:37.545308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.560847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.560888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:44.552 [2024-10-13 04:17:37.560903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.518 ms 00:26:44.552 [2024-10-13 04:17:37.560911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.561063] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:44.552 [2024-10-13 04:17:37.561077] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:44.552 [2024-10-13 04:17:37.561087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.561095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:44.552 [2024-10-13 04:17:37.561103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:26:44.552 [2024-10-13 04:17:37.561113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.573562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.573602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:44.552 [2024-10-13 04:17:37.573621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.428 ms 00:26:44.552 [2024-10-13 04:17:37.573630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.573752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.573763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:44.552 [2024-10-13 04:17:37.573772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:26:44.552 [2024-10-13 04:17:37.573779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.573832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.573848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:44.552 [2024-10-13 04:17:37.573857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:26:44.552 [2024-10-13 04:17:37.573865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.574448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.574471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:44.552 [2024-10-13 04:17:37.574481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:26:44.552 [2024-10-13 04:17:37.574489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.574506] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:44.552 [2024-10-13 04:17:37.574516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.574528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:26:44.552 [2024-10-13 04:17:37.574536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:44.552 [2024-10-13 04:17:37.574544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.587000] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:44.552 [2024-10-13 04:17:37.587162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.587173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:44.552 [2024-10-13 04:17:37.587184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.599 ms 00:26:44.552 [2024-10-13 04:17:37.587192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.589535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.589573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:44.552 [2024-10-13 04:17:37.589586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.319 ms 00:26:44.552 [2024-10-13 04:17:37.589593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.589679] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:26:44.552 [2024-10-13 04:17:37.590132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.590149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:44.552 [2024-10-13 04:17:37.590159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:26:44.552 [2024-10-13 04:17:37.590166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.590192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.590201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:44.552 [2024-10-13 04:17:37.590214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:44.552 [2024-10-13 04:17:37.590222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.590254] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:44.552 [2024-10-13 04:17:37.590264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.590272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:44.552 [2024-10-13 04:17:37.590280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:44.552 [2024-10-13 04:17:37.590288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.616419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.616480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:44.552 [2024-10-13 04:17:37.616492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.115 ms 00:26:44.552 [2024-10-13 04:17:37.616500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.616587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.552 [2024-10-13 04:17:37.616596] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:44.552 [2024-10-13 04:17:37.616606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:44.552 [2024-10-13 04:17:37.616627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.552 [2024-10-13 04:17:37.617947] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.945 ms, result 0 00:26:45.937  [2024-10-13T04:17:40.042Z] Copying: 13/1024 [MB] (13 MBps) [2024-10-13T04:17:40.985Z] Copying: 24/1024 [MB] (10 MBps) [2024-10-13T04:17:41.927Z] Copying: 40/1024 [MB] (15 MBps) [2024-10-13T04:17:42.871Z] Copying: 58/1024 [MB] (18 MBps) [2024-10-13T04:17:43.814Z] Copying: 81/1024 [MB] (22 MBps) [2024-10-13T04:17:45.201Z] Copying: 93/1024 [MB] (11 MBps) [2024-10-13T04:17:46.144Z] Copying: 109/1024 [MB] (16 MBps) [2024-10-13T04:17:47.087Z] Copying: 132/1024 [MB] (22 MBps) [2024-10-13T04:17:48.032Z] Copying: 151/1024 [MB] (19 MBps) [2024-10-13T04:17:48.974Z] Copying: 171/1024 [MB] (19 MBps) [2024-10-13T04:17:49.918Z] Copying: 194/1024 [MB] (22 MBps) [2024-10-13T04:17:50.862Z] Copying: 213/1024 [MB] (19 MBps) [2024-10-13T04:17:52.248Z] Copying: 230/1024 [MB] (17 MBps) [2024-10-13T04:17:52.822Z] Copying: 243/1024 [MB] (12 MBps) [2024-10-13T04:17:54.211Z] Copying: 253/1024 [MB] (10 MBps) [2024-10-13T04:17:55.195Z] Copying: 264/1024 [MB] (10 MBps) [2024-10-13T04:17:56.167Z] Copying: 284/1024 [MB] (20 MBps) [2024-10-13T04:17:57.110Z] Copying: 297/1024 [MB] (12 MBps) [2024-10-13T04:17:58.053Z] Copying: 308/1024 [MB] (10 MBps) [2024-10-13T04:17:58.996Z] Copying: 318/1024 [MB] (10 MBps) [2024-10-13T04:17:59.941Z] Copying: 329/1024 [MB] (10 MBps) [2024-10-13T04:18:00.881Z] Copying: 340/1024 [MB] (10 MBps) [2024-10-13T04:18:01.824Z] Copying: 352/1024 [MB] (12 MBps) [2024-10-13T04:18:03.214Z] Copying: 369/1024 [MB] (16 MBps) [2024-10-13T04:18:04.159Z] Copying: 388/1024 [MB] (19 MBps) [2024-10-13T04:18:05.102Z] Copying: 410/1024 [MB] (21 MBps) [2024-10-13T04:18:06.046Z] Copying: 421/1024 [MB] (11 MBps) [2024-10-13T04:18:06.990Z] Copying: 432/1024 [MB] (10 MBps) [2024-10-13T04:18:07.935Z] Copying: 442/1024 [MB] (10 MBps) [2024-10-13T04:18:08.876Z] Copying: 453/1024 [MB] (10 MBps) [2024-10-13T04:18:09.821Z] Copying: 464/1024 [MB] (10 MBps) [2024-10-13T04:18:11.249Z] Copying: 481/1024 [MB] (16 MBps) [2024-10-13T04:18:11.821Z] Copying: 501/1024 [MB] (19 MBps) [2024-10-13T04:18:13.208Z] Copying: 521/1024 [MB] (20 MBps) [2024-10-13T04:18:14.152Z] Copying: 543/1024 [MB] (21 MBps) [2024-10-13T04:18:15.095Z] Copying: 558/1024 [MB] (15 MBps) [2024-10-13T04:18:16.039Z] Copying: 569/1024 [MB] (11 MBps) [2024-10-13T04:18:16.982Z] Copying: 583/1024 [MB] (14 MBps) [2024-10-13T04:18:17.926Z] Copying: 597/1024 [MB] (13 MBps) [2024-10-13T04:18:18.866Z] Copying: 617/1024 [MB] (20 MBps) [2024-10-13T04:18:20.252Z] Copying: 628/1024 [MB] (10 MBps) [2024-10-13T04:18:20.825Z] Copying: 638/1024 [MB] (10 MBps) [2024-10-13T04:18:22.211Z] Copying: 665/1024 [MB] (26 MBps) [2024-10-13T04:18:23.154Z] Copying: 683/1024 [MB] (17 MBps) [2024-10-13T04:18:24.098Z] Copying: 699/1024 [MB] (16 MBps) [2024-10-13T04:18:25.041Z] Copying: 724/1024 [MB] (25 MBps) [2024-10-13T04:18:25.992Z] Copying: 740/1024 [MB] (15 MBps) [2024-10-13T04:18:26.935Z] Copying: 759/1024 [MB] (18 MBps) [2024-10-13T04:18:27.879Z] Copying: 782/1024 [MB] (23 MBps) [2024-10-13T04:18:28.836Z] Copying: 804/1024 [MB] (21 MBps) [2024-10-13T04:18:30.221Z] Copying: 820/1024 [MB] (16 MBps) 
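The FTL startup above is driven by mngt/ftl_mngt.c, which logs each step as an Action / name / duration / status quadruple and then a finish_msg summary ('FTL startup', 158.945 ms here). To sanity-check a run, the per-step durations can be pulled back out of a saved copy of this console output; the snippet below is an ad-hoc helper, not part of the SPDK tree, and "build.log" is a placeholder path.

    #!/usr/bin/env bash
    # Sum every "duration: <N> ms" printed by trace_step and list the
    # management-process totals printed by finish_msg for comparison.
    LOG=${1:-build.log}
    grep -oE 'duration: [0-9.]+ ms' "$LOG" |
      awk '{ sum += $2; n++ } END { printf "%d trace_step entries, %.3f ms total\n", n, sum }'
    grep -oE "name '[^']+', duration = [0-9.]+ ms" "$LOG"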
[2024-10-13T04:18:31.164Z] Copying: 837/1024 [MB] (17 MBps) [2024-10-13T04:18:32.109Z] Copying: 853/1024 [MB] (15 MBps) [2024-10-13T04:18:33.090Z] Copying: 873/1024 [MB] (19 MBps) [2024-10-13T04:18:34.034Z] Copying: 894/1024 [MB] (21 MBps) [2024-10-13T04:18:34.977Z] Copying: 916/1024 [MB] (21 MBps) [2024-10-13T04:18:35.918Z] Copying: 931/1024 [MB] (14 MBps) [2024-10-13T04:18:36.861Z] Copying: 941/1024 [MB] (10 MBps) [2024-10-13T04:18:38.247Z] Copying: 964/1024 [MB] (23 MBps) [2024-10-13T04:18:38.817Z] Copying: 984/1024 [MB] (19 MBps) [2024-10-13T04:18:40.203Z] Copying: 999/1024 [MB] (14 MBps) [2024-10-13T04:18:40.464Z] Copying: 1016/1024 [MB] (16 MBps) [2024-10-13T04:18:40.726Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-13 04:18:40.626385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.567 [2024-10-13 04:18:40.626489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:47.567 [2024-10-13 04:18:40.626508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:47.567 [2024-10-13 04:18:40.626519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.567 [2024-10-13 04:18:40.626549] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:47.567 [2024-10-13 04:18:40.630723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.567 [2024-10-13 04:18:40.630776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:47.567 [2024-10-13 04:18:40.630794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.154 ms 00:27:47.567 [2024-10-13 04:18:40.630808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.567 [2024-10-13 04:18:40.631175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.567 [2024-10-13 04:18:40.631191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:47.567 [2024-10-13 04:18:40.631211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:27:47.567 [2024-10-13 04:18:40.631223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.567 [2024-10-13 04:18:40.631267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.567 [2024-10-13 04:18:40.631281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:47.567 [2024-10-13 04:18:40.631295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:47.567 [2024-10-13 04:18:40.631307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.567 [2024-10-13 04:18:40.631381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.567 [2024-10-13 04:18:40.631394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:47.567 [2024-10-13 04:18:40.631407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:47.567 [2024-10-13 04:18:40.631427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.567 [2024-10-13 04:18:40.631448] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:47.567 [2024-10-13 04:18:40.631466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:27:47.567 [2024-10-13 04:18:40.631481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 
[2024-10-13 04:18:40.631494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:27:47.567 [2024-10-13 04:18:40.631824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.631988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:47.567 [2024-10-13 04:18:40.632444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:47.568 [2024-10-13 04:18:40.632767] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:47.568 [2024-10-13 04:18:40.632788] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
fec26d04-fef4-44ba-8cc2-76710abc8644 00:27:47.568 [2024-10-13 04:18:40.632807] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:27:47.568 [2024-10-13 04:18:40.632819] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3616 00:27:47.568 [2024-10-13 04:18:40.632830] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3584 00:27:47.568 [2024-10-13 04:18:40.632844] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:27:47.568 [2024-10-13 04:18:40.632855] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:47.568 [2024-10-13 04:18:40.632868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:47.568 [2024-10-13 04:18:40.632884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:47.568 [2024-10-13 04:18:40.632895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:47.568 [2024-10-13 04:18:40.632906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:47.568 [2024-10-13 04:18:40.632917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.568 [2024-10-13 04:18:40.632930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:47.568 [2024-10-13 04:18:40.632942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:27:47.568 [2024-10-13 04:18:40.632954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.568 [2024-10-13 04:18:40.649226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.568 [2024-10-13 04:18:40.649268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:47.568 [2024-10-13 04:18:40.649282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.248 ms 00:27:47.568 [2024-10-13 04:18:40.649291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.568 [2024-10-13 04:18:40.649710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.568 [2024-10-13 04:18:40.649728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:47.568 [2024-10-13 04:18:40.649739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:27:47.568 [2024-10-13 04:18:40.649747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.568 [2024-10-13 04:18:40.686161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.568 [2024-10-13 04:18:40.686204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:47.568 [2024-10-13 04:18:40.686220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.568 [2024-10-13 04:18:40.686229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.568 [2024-10-13 04:18:40.686302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.568 [2024-10-13 04:18:40.686312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:47.568 [2024-10-13 04:18:40.686321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.568 [2024-10-13 04:18:40.686331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.568 [2024-10-13 04:18:40.686396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.568 [2024-10-13 04:18:40.686407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
00:27:47.568 [2024-10-13 04:18:40.686417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.568 [2024-10-13 04:18:40.686434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.568 [2024-10-13 04:18:40.686452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.568 [2024-10-13 04:18:40.686461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:47.568 [2024-10-13 04:18:40.686471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.568 [2024-10-13 04:18:40.686480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.829 [2024-10-13 04:18:40.771163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.829 [2024-10-13 04:18:40.771226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:47.829 [2024-10-13 04:18:40.771241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.829 [2024-10-13 04:18:40.771255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.829 [2024-10-13 04:18:40.840945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.829 [2024-10-13 04:18:40.841000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:47.829 [2024-10-13 04:18:40.841020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.829 [2024-10-13 04:18:40.841029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.829 [2024-10-13 04:18:40.841136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.829 [2024-10-13 04:18:40.841147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:47.829 [2024-10-13 04:18:40.841157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.829 [2024-10-13 04:18:40.841168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.829 [2024-10-13 04:18:40.841212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.829 [2024-10-13 04:18:40.841221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:47.829 [2024-10-13 04:18:40.841230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.829 [2024-10-13 04:18:40.841238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.829 [2024-10-13 04:18:40.841316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.829 [2024-10-13 04:18:40.841326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:47.829 [2024-10-13 04:18:40.841335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.829 [2024-10-13 04:18:40.841343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.829 [2024-10-13 04:18:40.841372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.829 [2024-10-13 04:18:40.841384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:47.829 [2024-10-13 04:18:40.841393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.829 [2024-10-13 04:18:40.841401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.829 [2024-10-13 04:18:40.841442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.829 [2024-10-13 04:18:40.841451] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:47.829 [2024-10-13 04:18:40.841459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.829 [2024-10-13 04:18:40.841471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.829 [2024-10-13 04:18:40.841519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:47.829 [2024-10-13 04:18:40.841530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:47.829 [2024-10-13 04:18:40.841538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:47.829 [2024-10-13 04:18:40.841546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.829 [2024-10-13 04:18:40.841701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 215.287 ms, result 0 00:27:48.773 00:27:48.773 00:27:48.773 04:18:41 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:50.689 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 78425 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 78425 ']' 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 78425 00:27:50.689 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78425) - No such process 00:27:50.689 Process with pid 78425 is not found 00:27:50.689 Remove shared memory files 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 78425 is not found' 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_fec26d04-fef4-44ba-8cc2-76710abc8644_band_md /dev/hugepages/ftl_fec26d04-fef4-44ba-8cc2-76710abc8644_l2p_l1 /dev/hugepages/ftl_fec26d04-fef4-44ba-8cc2-76710abc8644_l2p_l2 /dev/hugepages/ftl_fec26d04-fef4-44ba-8cc2-76710abc8644_l2p_l2_ctx /dev/hugepages/ftl_fec26d04-fef4-44ba-8cc2-76710abc8644_nvc_md /dev/hugepages/ftl_fec26d04-fef4-44ba-8cc2-76710abc8644_p2l_pool /dev/hugepages/ftl_fec26d04-fef4-44ba-8cc2-76710abc8644_sb /dev/hugepages/ftl_fec26d04-fef4-44ba-8cc2-76710abc8644_sb_shm /dev/hugepages/ftl_fec26d04-fef4-44ba-8cc2-76710abc8644_trim_bitmap /dev/hugepages/ftl_fec26d04-fef4-44ba-8cc2-76710abc8644_trim_log /dev/hugepages/ftl_fec26d04-fef4-44ba-8cc2-76710abc8644_trim_md /dev/hugepages/ftl_fec26d04-fef4-44ba-8cc2-76710abc8644_vmap 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- 
ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:27:50.689 00:27:50.689 real 4m35.890s 00:27:50.689 user 4m23.468s 00:27:50.689 sys 0m11.964s 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:50.689 ************************************ 00:27:50.689 END TEST ftl_restore_fast 00:27:50.689 ************************************ 00:27:50.689 04:18:43 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:27:50.950 04:18:43 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:50.950 04:18:43 ftl -- ftl/ftl.sh@14 -- # killprocess 72547 00:27:50.950 04:18:43 ftl -- common/autotest_common.sh@950 -- # '[' -z 72547 ']' 00:27:50.950 04:18:43 ftl -- common/autotest_common.sh@954 -- # kill -0 72547 00:27:50.950 Process with pid 72547 is not found 00:27:50.950 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72547) - No such process 00:27:50.950 04:18:43 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 72547 is not found' 00:27:50.950 04:18:43 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:27:50.950 04:18:43 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81270 00:27:50.950 04:18:43 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81270 00:27:50.950 04:18:43 ftl -- common/autotest_common.sh@831 -- # '[' -z 81270 ']' 00:27:50.950 04:18:43 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.950 04:18:43 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:50.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.950 04:18:43 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.950 04:18:43 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:50.950 04:18:43 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:50.950 04:18:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:50.951 [2024-10-13 04:18:43.955791] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 24.03.0 initialization... 
00:27:50.951 [2024-10-13 04:18:43.955924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81270 ] 00:27:50.951 [2024-10-13 04:18:44.108504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.211 [2024-10-13 04:18:44.227012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.784 04:18:44 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:51.784 04:18:44 ftl -- common/autotest_common.sh@864 -- # return 0 00:27:51.784 04:18:44 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:52.045 nvme0n1 00:27:52.307 04:18:45 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:27:52.307 04:18:45 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:52.307 04:18:45 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:52.307 04:18:45 ftl -- ftl/common.sh@28 -- # stores=23fcb54c-15e7-421d-8ef0-9cc765ef80de 00:27:52.307 04:18:45 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:27:52.307 04:18:45 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 23fcb54c-15e7-421d-8ef0-9cc765ef80de 00:27:52.568 04:18:45 ftl -- ftl/ftl.sh@23 -- # killprocess 81270 00:27:52.568 04:18:45 ftl -- common/autotest_common.sh@950 -- # '[' -z 81270 ']' 00:27:52.568 04:18:45 ftl -- common/autotest_common.sh@954 -- # kill -0 81270 00:27:52.568 04:18:45 ftl -- common/autotest_common.sh@955 -- # uname 00:27:52.568 04:18:45 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:52.568 04:18:45 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81270 00:27:52.568 killing process with pid 81270 00:27:52.568 04:18:45 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:52.568 04:18:45 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:52.568 04:18:45 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81270' 00:27:52.568 04:18:45 ftl -- common/autotest_common.sh@969 -- # kill 81270 00:27:52.568 04:18:45 ftl -- common/autotest_common.sh@974 -- # wait 81270 00:27:53.955 04:18:47 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:54.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:54.216 Waiting for block devices as requested 00:27:54.216 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:54.477 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:54.477 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:54.477 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:59.769 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:59.769 Remove shared memory files 00:27:59.769 04:18:52 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:27:59.769 04:18:52 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:59.769 04:18:52 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:27:59.769 04:18:52 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:27:59.769 04:18:52 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:27:59.769 04:18:52 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:59.769 04:18:52 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:27:59.769 
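The shutdown statistics dumped above explain the reported WAF directly: 3616 total writes against 3584 user writes gives 3616 / 3584 ≈ 1.0089. Once the 'FTL fast shutdown' management process finishes, restore.sh verifies the restored file with md5sum -c and tears the target down; the '[' -z ... ']' / kill -0 sequence in the trace is the usual autotest process check. A simplified re-implementation of that idiom is sketched below; it illustrates the pattern only and is not the actual autotest_common.sh helper, which additionally inspects the process name via ps before killing it.

    # Sketch of the killprocess idiom visible in the trace (kill -0 probes
    # for existence; a dead PID yields "No such process" as logged above).
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
            wait "$pid" 2>/dev/null
        else
            echo "Process with pid $pid is not found"
        fi
    }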
************************************ 00:27:59.769 END TEST ftl 00:27:59.769 ************************************ 00:27:59.769 00:27:59.769 real 12m48.130s 00:27:59.769 user 14m47.288s 00:27:59.769 sys 1m12.266s 00:27:59.769 04:18:52 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:59.769 04:18:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:59.769 04:18:52 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:59.769 04:18:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:59.769 04:18:52 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:27:59.769 04:18:52 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:59.769 04:18:52 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:27:59.769 04:18:52 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:59.769 04:18:52 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:59.769 04:18:52 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:27:59.769 04:18:52 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:27:59.769 04:18:52 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:27:59.769 04:18:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:59.769 04:18:52 -- common/autotest_common.sh@10 -- # set +x 00:27:59.769 04:18:52 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:27:59.769 04:18:52 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:59.769 04:18:52 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:59.769 04:18:52 -- common/autotest_common.sh@10 -- # set +x 00:28:01.157 INFO: APP EXITING 00:28:01.157 INFO: killing all VMs 00:28:01.157 INFO: killing vhost app 00:28:01.157 INFO: EXIT DONE 00:28:01.457 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:01.718 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:01.718 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:01.718 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:01.979 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:02.240 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:02.502 Cleaning 00:28:02.502 Removing: /var/run/dpdk/spdk0/config 00:28:02.502 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:02.502 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:02.502 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:02.502 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:02.502 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:02.502 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:02.502 Removing: /var/run/dpdk/spdk0 00:28:02.502 Removing: /var/run/dpdk/spdk_pid57310 00:28:02.502 Removing: /var/run/dpdk/spdk_pid57502 00:28:02.502 Removing: /var/run/dpdk/spdk_pid57714 00:28:02.502 Removing: /var/run/dpdk/spdk_pid57807 00:28:02.502 Removing: /var/run/dpdk/spdk_pid57847 00:28:02.502 Removing: /var/run/dpdk/spdk_pid57964 00:28:02.502 Removing: /var/run/dpdk/spdk_pid57982 00:28:02.502 Removing: /var/run/dpdk/spdk_pid58175 00:28:02.502 Removing: /var/run/dpdk/spdk_pid58268 00:28:02.502 Removing: /var/run/dpdk/spdk_pid58359 00:28:02.502 Removing: /var/run/dpdk/spdk_pid58470 00:28:02.502 Removing: /var/run/dpdk/spdk_pid58556 00:28:02.502 Removing: /var/run/dpdk/spdk_pid58595 00:28:02.763 Removing: /var/run/dpdk/spdk_pid58632 00:28:02.763 Removing: /var/run/dpdk/spdk_pid58702 00:28:02.763 Removing: /var/run/dpdk/spdk_pid58781 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59217 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59270 
00:28:02.763 Removing: /var/run/dpdk/spdk_pid59322 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59338 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59429 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59445 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59536 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59552 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59605 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59623 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59676 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59694 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59843 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59880 00:28:02.763 Removing: /var/run/dpdk/spdk_pid59963 00:28:02.763 Removing: /var/run/dpdk/spdk_pid60141 00:28:02.763 Removing: /var/run/dpdk/spdk_pid60219 00:28:02.763 Removing: /var/run/dpdk/spdk_pid60256 00:28:02.763 Removing: /var/run/dpdk/spdk_pid60689 00:28:02.763 Removing: /var/run/dpdk/spdk_pid60787 00:28:02.763 Removing: /var/run/dpdk/spdk_pid60896 00:28:02.763 Removing: /var/run/dpdk/spdk_pid60949 00:28:02.763 Removing: /var/run/dpdk/spdk_pid60979 00:28:02.763 Removing: /var/run/dpdk/spdk_pid61053 00:28:02.763 Removing: /var/run/dpdk/spdk_pid61684 00:28:02.763 Removing: /var/run/dpdk/spdk_pid61726 00:28:02.763 Removing: /var/run/dpdk/spdk_pid62198 00:28:02.763 Removing: /var/run/dpdk/spdk_pid62296 00:28:02.763 Removing: /var/run/dpdk/spdk_pid62411 00:28:02.763 Removing: /var/run/dpdk/spdk_pid62465 00:28:02.763 Removing: /var/run/dpdk/spdk_pid62485 00:28:02.763 Removing: /var/run/dpdk/spdk_pid62516 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64355 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64481 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64485 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64502 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64548 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64552 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64564 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64609 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64613 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64625 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64670 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64674 00:28:02.763 Removing: /var/run/dpdk/spdk_pid64686 00:28:02.763 Removing: /var/run/dpdk/spdk_pid66051 00:28:02.763 Removing: /var/run/dpdk/spdk_pid66148 00:28:02.763 Removing: /var/run/dpdk/spdk_pid67550 00:28:02.763 Removing: /var/run/dpdk/spdk_pid68909 00:28:02.763 Removing: /var/run/dpdk/spdk_pid68996 00:28:02.763 Removing: /var/run/dpdk/spdk_pid69078 00:28:02.763 Removing: /var/run/dpdk/spdk_pid69154 00:28:02.763 Removing: /var/run/dpdk/spdk_pid69252 00:28:02.763 Removing: /var/run/dpdk/spdk_pid69321 00:28:02.763 Removing: /var/run/dpdk/spdk_pid69463 00:28:02.763 Removing: /var/run/dpdk/spdk_pid69822 00:28:02.763 Removing: /var/run/dpdk/spdk_pid69853 00:28:02.763 Removing: /var/run/dpdk/spdk_pid70284 00:28:02.763 Removing: /var/run/dpdk/spdk_pid70472 00:28:02.763 Removing: /var/run/dpdk/spdk_pid70576 00:28:02.763 Removing: /var/run/dpdk/spdk_pid70687 00:28:02.763 Removing: /var/run/dpdk/spdk_pid70730 00:28:02.763 Removing: /var/run/dpdk/spdk_pid70761 00:28:02.763 Removing: /var/run/dpdk/spdk_pid71074 00:28:02.763 Removing: /var/run/dpdk/spdk_pid71129 00:28:02.763 Removing: /var/run/dpdk/spdk_pid71202 00:28:02.763 Removing: /var/run/dpdk/spdk_pid71591 00:28:02.763 Removing: /var/run/dpdk/spdk_pid71742 00:28:02.763 Removing: /var/run/dpdk/spdk_pid72547 00:28:02.763 Removing: /var/run/dpdk/spdk_pid72679 00:28:02.763 Removing: /var/run/dpdk/spdk_pid72837 00:28:02.763 Removing: 
/var/run/dpdk/spdk_pid72934 00:28:02.763 Removing: /var/run/dpdk/spdk_pid73233 00:28:02.763 Removing: /var/run/dpdk/spdk_pid73475 00:28:02.763 Removing: /var/run/dpdk/spdk_pid73809 00:28:02.763 Removing: /var/run/dpdk/spdk_pid73994 00:28:02.763 Removing: /var/run/dpdk/spdk_pid74136 00:28:02.763 Removing: /var/run/dpdk/spdk_pid74189 00:28:02.763 Removing: /var/run/dpdk/spdk_pid74315 00:28:02.763 Removing: /var/run/dpdk/spdk_pid74340 00:28:02.763 Removing: /var/run/dpdk/spdk_pid74393 00:28:02.763 Removing: /var/run/dpdk/spdk_pid74569 00:28:02.763 Removing: /var/run/dpdk/spdk_pid74777 00:28:02.763 Removing: /var/run/dpdk/spdk_pid75040 00:28:02.763 Removing: /var/run/dpdk/spdk_pid75313 00:28:02.763 Removing: /var/run/dpdk/spdk_pid75576 00:28:02.763 Removing: /var/run/dpdk/spdk_pid75909 00:28:02.763 Removing: /var/run/dpdk/spdk_pid76040 00:28:02.763 Removing: /var/run/dpdk/spdk_pid76119 00:28:02.763 Removing: /var/run/dpdk/spdk_pid76477 00:28:03.025 Removing: /var/run/dpdk/spdk_pid76534 00:28:03.025 Removing: /var/run/dpdk/spdk_pid76837 00:28:03.025 Removing: /var/run/dpdk/spdk_pid77123 00:28:03.025 Removing: /var/run/dpdk/spdk_pid77459 00:28:03.025 Removing: /var/run/dpdk/spdk_pid77573 00:28:03.025 Removing: /var/run/dpdk/spdk_pid77615 00:28:03.025 Removing: /var/run/dpdk/spdk_pid77677 00:28:03.025 Removing: /var/run/dpdk/spdk_pid77727 00:28:03.025 Removing: /var/run/dpdk/spdk_pid77789 00:28:03.025 Removing: /var/run/dpdk/spdk_pid77959 00:28:03.025 Removing: /var/run/dpdk/spdk_pid78029 00:28:03.025 Removing: /var/run/dpdk/spdk_pid78096 00:28:03.025 Removing: /var/run/dpdk/spdk_pid78146 00:28:03.025 Removing: /var/run/dpdk/spdk_pid78186 00:28:03.025 Removing: /var/run/dpdk/spdk_pid78275 00:28:03.025 Removing: /var/run/dpdk/spdk_pid78425 00:28:03.025 Removing: /var/run/dpdk/spdk_pid78628 00:28:03.025 Removing: /var/run/dpdk/spdk_pid79165 00:28:03.025 Removing: /var/run/dpdk/spdk_pid79870 00:28:03.025 Removing: /var/run/dpdk/spdk_pid80559 00:28:03.025 Removing: /var/run/dpdk/spdk_pid81270 00:28:03.025 Clean 00:28:03.025 04:18:56 -- common/autotest_common.sh@1451 -- # return 0 00:28:03.025 04:18:56 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:28:03.025 04:18:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:03.025 04:18:56 -- common/autotest_common.sh@10 -- # set +x 00:28:03.025 04:18:56 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:28:03.025 04:18:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:03.025 04:18:56 -- common/autotest_common.sh@10 -- # set +x 00:28:03.025 04:18:56 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:03.025 04:18:56 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:03.025 04:18:56 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:03.025 04:18:56 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:28:03.025 04:18:56 -- spdk/autotest.sh@394 -- # hostname 00:28:03.025 04:18:56 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:03.285 geninfo: WARNING: invalid characters removed from testname! 
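The lcov capture above (run with --no-external against the build tree and tagged with the hostname) is followed in the next lines by a merge with the pre-test baseline and a series of filter passes. Condensed into one runnable sequence for reference, with directory and output paths shortened to placeholders and the long list of --rc options from the log omitted:

    # Coverage post-processing as traced from spdk/autotest.sh (condensed).
    lcov -q -c --no-external -d spdk -t "$(hostname)" -o cov_test.info
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info
    rm -f cov_base.info cov_test.info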
00:28:29.871 04:19:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:30.815 04:19:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:33.363 04:19:26 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:35.909 04:19:29 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:39.214 04:19:31 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:41.761 04:19:34 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:44.309 04:19:37 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:44.309 04:19:37 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:28:44.309 04:19:37 -- common/autotest_common.sh@1691 -- $ lcov --version 00:28:44.309 04:19:37 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:28:44.309 04:19:37 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:28:44.309 04:19:37 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:28:44.309 04:19:37 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:28:44.309 04:19:37 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:28:44.309 04:19:37 -- scripts/common.sh@336 -- $ IFS=.-: 00:28:44.309 04:19:37 -- scripts/common.sh@336 -- $ read -ra ver1 00:28:44.309 04:19:37 -- scripts/common.sh@337 -- $ IFS=.-: 00:28:44.309 04:19:37 -- scripts/common.sh@337 -- $ read -ra ver2 00:28:44.309 04:19:37 -- scripts/common.sh@338 -- $ local 'op=<' 00:28:44.309 04:19:37 -- scripts/common.sh@340 -- $ ver1_l=2 00:28:44.309 04:19:37 -- scripts/common.sh@341 -- $ ver2_l=1 00:28:44.309 04:19:37 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:28:44.309 04:19:37 -- scripts/common.sh@344 -- $ case "$op" in 00:28:44.309 04:19:37 -- scripts/common.sh@345 -- $ : 1 00:28:44.309 04:19:37 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:28:44.309 04:19:37 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:44.309 04:19:37 -- scripts/common.sh@365 -- $ decimal 1 00:28:44.309 04:19:37 -- scripts/common.sh@353 -- $ local d=1 00:28:44.309 04:19:37 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:28:44.309 04:19:37 -- scripts/common.sh@355 -- $ echo 1 00:28:44.309 04:19:37 -- scripts/common.sh@365 -- $ ver1[v]=1 00:28:44.309 04:19:37 -- scripts/common.sh@366 -- $ decimal 2 00:28:44.309 04:19:37 -- scripts/common.sh@353 -- $ local d=2 00:28:44.309 04:19:37 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:28:44.309 04:19:37 -- scripts/common.sh@355 -- $ echo 2 00:28:44.309 04:19:37 -- scripts/common.sh@366 -- $ ver2[v]=2 00:28:44.309 04:19:37 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:28:44.309 04:19:37 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:28:44.309 04:19:37 -- scripts/common.sh@368 -- $ return 0 00:28:44.309 04:19:37 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.309 04:19:37 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:28:44.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.309 --rc genhtml_branch_coverage=1 00:28:44.309 --rc genhtml_function_coverage=1 00:28:44.309 --rc genhtml_legend=1 00:28:44.309 --rc geninfo_all_blocks=1 00:28:44.309 --rc geninfo_unexecuted_blocks=1 00:28:44.309 00:28:44.309 ' 00:28:44.309 04:19:37 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:28:44.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.309 --rc genhtml_branch_coverage=1 00:28:44.309 --rc genhtml_function_coverage=1 00:28:44.309 --rc genhtml_legend=1 00:28:44.309 --rc geninfo_all_blocks=1 00:28:44.309 --rc geninfo_unexecuted_blocks=1 00:28:44.309 00:28:44.309 ' 00:28:44.309 04:19:37 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:28:44.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.309 --rc genhtml_branch_coverage=1 00:28:44.309 --rc genhtml_function_coverage=1 00:28:44.309 --rc genhtml_legend=1 00:28:44.309 --rc geninfo_all_blocks=1 00:28:44.309 --rc geninfo_unexecuted_blocks=1 00:28:44.309 00:28:44.309 ' 00:28:44.309 04:19:37 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:28:44.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.309 --rc genhtml_branch_coverage=1 00:28:44.309 --rc genhtml_function_coverage=1 00:28:44.309 --rc genhtml_legend=1 00:28:44.309 --rc geninfo_all_blocks=1 00:28:44.309 --rc geninfo_unexecuted_blocks=1 00:28:44.309 00:28:44.309 ' 00:28:44.309 04:19:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:44.309 04:19:37 -- scripts/common.sh@15 -- $ shopt -s extglob 00:28:44.309 04:19:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:44.309 04:19:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.309 04:19:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.309 04:19:37 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.309 04:19:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.309 04:19:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.309 04:19:37 -- paths/export.sh@5 -- $ export PATH 00:28:44.309 04:19:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.309 04:19:37 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:44.309 04:19:37 -- common/autobuild_common.sh@486 -- $ date +%s 00:28:44.309 04:19:37 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728793177.XXXXXX 00:28:44.309 04:19:37 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728793177.h3RiwM 00:28:44.309 04:19:37 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:28:44.309 04:19:37 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:28:44.309 04:19:37 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:44.309 04:19:37 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:44.309 04:19:37 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:44.310 04:19:37 -- common/autobuild_common.sh@502 -- $ get_config_params 00:28:44.310 04:19:37 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:28:44.310 04:19:37 -- common/autotest_common.sh@10 -- $ set +x 00:28:44.310 04:19:37 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:28:44.310 04:19:37 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:28:44.310 04:19:37 -- pm/common@17 -- $ local monitor 00:28:44.310 04:19:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:44.310 04:19:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
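The xtrace between the lcov calls and the PATH exports is scripts/common.sh deciding whether the installed lcov (1.15 in this run) is older than 2; that check is what keeps the pre-2.0 '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' option names in LCOV_OPTS. Below is a standalone approximation of that dotted-version comparison, reconstructed from the traced steps rather than copied from the script, so field handling may differ in detail.

#!/usr/bin/env bash
# Approximation of cmp_versions/lt as traced above (scripts/common.sh);
# reconstructed from the xtrace, not the script itself.

cmp_versions() {                       # e.g. cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '=' ]]                   # every field compared equal
}
lt() { cmp_versions "$1" '<' "$2"; }

ver=$(lcov --version | awk '{print $NF}')   # "1.15" in this run
if lt "$ver" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
echo "LCOV_OPTS=${LCOV_OPTS:-}"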
00:28:44.310 04:19:37 -- pm/common@25 -- $ sleep 1 00:28:44.310 04:19:37 -- pm/common@21 -- $ date +%s 00:28:44.310 04:19:37 -- pm/common@21 -- $ date +%s 00:28:44.310 04:19:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728793177 00:28:44.310 04:19:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728793177 00:28:44.310 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728793177_collect-cpu-load.pm.log 00:28:44.310 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728793177_collect-vmstat.pm.log 00:28:45.253 04:19:38 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:28:45.253 04:19:38 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:28:45.253 04:19:38 -- spdk/autopackage.sh@14 -- $ timing_finish 00:28:45.253 04:19:38 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:45.253 04:19:38 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:28:45.253 04:19:38 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:45.253 04:19:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:45.253 04:19:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:45.253 04:19:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:45.253 04:19:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.253 04:19:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:28:45.253 04:19:38 -- pm/common@44 -- $ pid=82976 00:28:45.253 04:19:38 -- pm/common@50 -- $ kill -TERM 82976 00:28:45.253 04:19:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.253 04:19:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:28:45.253 04:19:38 -- pm/common@44 -- $ pid=82977 00:28:45.253 04:19:38 -- pm/common@50 -- $ kill -TERM 82977 00:28:45.253 + [[ -n 5027 ]] 00:28:45.253 + sudo kill 5027 00:28:45.524 [Pipeline] } 00:28:45.539 [Pipeline] // timeout 00:28:45.543 [Pipeline] } 00:28:45.557 [Pipeline] // stage 00:28:45.562 [Pipeline] } 00:28:45.575 [Pipeline] // catchError 00:28:45.583 [Pipeline] stage 00:28:45.585 [Pipeline] { (Stop VM) 00:28:45.597 [Pipeline] sh 00:28:45.938 + vagrant halt 00:28:48.479 ==> default: Halting domain... 00:28:53.783 [Pipeline] sh 00:28:54.063 + vagrant destroy -f 00:28:56.607 ==> default: Removing domain... 
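The pm/common trace above starts one background collector per resource monitor (collect-cpu-load and collect-vmstat), names the sample logs with a shared epoch-timestamp prefix, and later stops each one by sending TERM to the PID recorded in $output/power/<monitor>.pid. A minimal sketch of that pidfile-based start/stop pattern follows; the collector names and -d/-l/-p arguments come from the log, but how the real collectors record their PIDs is not visible in the trace, so here the wrapper writes the pidfile itself.

#!/usr/bin/env bash
# Illustrative pidfile-based monitor wrapper, modeled on the traced
# start_monitor_resources / stop_monitor_resources flow.

PM_DIR=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
POWER_DIR=/home/vagrant/spdk_repo/spdk/../output/power
MONITORS=(collect-cpu-load collect-vmstat)

start_monitor_resources() {
    local stamp m
    stamp=$(date +%s)
    for m in "${MONITORS[@]}"; do
        "$PM_DIR/$m" -d "$POWER_DIR" -l -p "monitor.autopackage.sh.$stamp" &
        echo $! > "$POWER_DIR/$m.pid"          # remember who to stop later
    done
    sleep 1                                    # let the collectors start logging
}

stop_monitor_resources() {
    local m pidfile
    for m in "${MONITORS[@]}"; do
        pidfile=$POWER_DIR/$m.pid
        [[ -e $pidfile ]] || continue
        kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
        rm -f "$pidfile"
    done
}

trap stop_monitor_resources EXIT
start_monitor_resources
# ... the build/packaging work being measured would run here ...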
00:28:56.881 [Pipeline] sh 00:28:57.167 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:28:57.177 [Pipeline] } 00:28:57.192 [Pipeline] // stage 00:28:57.197 [Pipeline] } 00:28:57.210 [Pipeline] // dir 00:28:57.216 [Pipeline] } 00:28:57.230 [Pipeline] // wrap 00:28:57.236 [Pipeline] } 00:28:57.248 [Pipeline] // catchError 00:28:57.257 [Pipeline] stage 00:28:57.260 [Pipeline] { (Epilogue) 00:28:57.272 [Pipeline] sh 00:28:57.558 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:02.849 [Pipeline] catchError 00:29:02.851 [Pipeline] { 00:29:02.864 [Pipeline] sh 00:29:03.149 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:03.149 Artifacts sizes are good 00:29:03.160 [Pipeline] } 00:29:03.174 [Pipeline] // catchError 00:29:03.185 [Pipeline] archiveArtifacts 00:29:03.193 Archiving artifacts 00:29:03.302 [Pipeline] cleanWs 00:29:03.326 [WS-CLEANUP] Deleting project workspace... 00:29:03.326 [WS-CLEANUP] Deferred wipeout is used... 00:29:03.348 [WS-CLEANUP] done 00:29:03.350 [Pipeline] } 00:29:03.365 [Pipeline] // stage 00:29:03.371 [Pipeline] } 00:29:03.384 [Pipeline] // node 00:29:03.389 [Pipeline] End of Pipeline 00:29:03.429 Finished: SUCCESS